About Me

Dhaka, Dhaka, Bangladesh
✔4x Salesforce Certified ✔Web Application Developer ✔Database Developer with DWH/ETL/BI • Solution engineer and developer with 16+ years of experience across multiple technologies and in different countries. Proficient in translating business requirements into technical solutions. Experienced in handling all phases of the project lifecycle, from discovery through deployment. Worked as technical lead for a team of junior developers, ensuring code stayed in line with requirements and standard best practices. Skilled at integrating disparate systems with Salesforce. Experience implementing Salesforce Community Cloud at two previous companies.

Wednesday, February 27, 2013

MYSQL: Query Execution Basics



1. The client sends the SQL statement to the server. 


The protocol is half-duplex, which means that at any given time the MySQL server can be either sending or receiving messages, but not both. It also means there is no way to cut a message short. The client sends a query to the server as a single packet of data. This is why the max_allowed_packet configuration variable is important if you have large queries. Once the client sends the query, it doesn't have the ball anymore; it can only wait for results. The response from the server usually consists of many packets of data.

When the server responds, the client has to receive the entire result set. It cannot simply fetch a few rows and then ask the server not to bother sending the rest. If the client needs only the first few rows that are returned, it either has to wait for all of the server's packets to arrive and then discard the ones it doesn't need, or disconnect ungracefully. Neither is a good idea, which is why appropriate LIMIT clauses are so important.

Here's another way to think about this: when a client fetches rows from the server, it thinks it's pulling them. But the truth is, the MySQL server is pushing the rows as it generates them. The client is only receiving the pushed rows; there is no way for it to tell the server to stop sending rows. The client is "drinking from the fire hose," so to speak.

2. The server checks the query cache. If there's a hit, it returns the stored result from the cache; otherwise, it passes the SQL statement to the next step.


Before even parsing a query, MySQL checks for it in the query cache, if the cache is enabled. This operation is a case-sensitive hash lookup. If the query differs from a similar query in the cache by even a single byte, it won't match, and query processing will go to the next stage. If MySQL does find a match in the query cache, it must check privileges before returning the cached query. This is possible without parsing the query, because MySQL stores table information with the cached query. If the privileges are OK, MySQL retrieves the stored result from the query cache and sends it to the client, bypassing every other stage in query execution. The query is never parsed, optimized, or executed.
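A toy sketch of this lookup in plain Python (illustrative only, not MySQL's actual implementation): the cache is keyed on a hash of the exact query bytes, so even a one-byte difference, including letter case, is a miss.

```python
import hashlib

class QueryCache:
    """Toy model of MySQL's query cache: an exact-bytes hash lookup."""

    def __init__(self):
        self._store = {}  # query hash -> (cached result, tables touched)

    def _key(self, sql: str) -> str:
        # Case-sensitive, byte-exact: no normalization of any kind.
        return hashlib.sha1(sql.encode("utf-8")).hexdigest()

    def put(self, sql, result, tables):
        self._store[self._key(sql)] = (result, tables)

    def get(self, sql):
        # Returns None on any byte-level difference from the cached query.
        return self._store.get(self._key(sql))

cache = QueryCache()
cache.put("SELECT * FROM user", [("alice",), ("bob",)], {"user"})

hit = cache.get("SELECT * FROM user")    # exact same bytes -> hit
miss = cache.get("select * from user")   # differs only in case -> miss
```

The real cache also stores the table list (as shown) so privileges can be checked without parsing.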

 3. The server parses, pre-processes, and optimizes the SQL into a query execution plan.

 MySQL’s parser breaks the query into tokens and builds a “parse tree” from them.

The parser uses MySQL's SQL grammar to interpret and validate the query. For instance, it ensures that the tokens in the query are valid and in the proper order, and it checks for mistakes such as quoted strings that aren't terminated. The preprocessor then checks the resulting parse tree for additional semantics that the parser can't resolve. For example, it checks that tables and columns exist, and it resolves names and aliases to ensure that column references aren't ambiguous. Next, the preprocessor checks privileges. This is normally very fast unless your server has large numbers of privileges.

The parse tree is now valid and ready for the optimizer to turn it into a query execution plan.

A query can often be executed many different ways and produce the same result. The optimizer’s job is to find the best option. MySQL uses a cost-based optimizer, which means it tries to predict the cost of various execution plans and choose the least expensive. The unit of cost is a single random four-kilobyte data page read.
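A rough sketch of cost-based plan selection in Python (illustrative only; the candidate plans and row counts are made up, and MySQL's real cost model is far richer):

```python
def plan_cost(rows_examined: int, rows_per_page: int) -> int:
    """Estimated cost in page reads (MySQL's unit of cost is one random
    four-kilobyte page read): pages needed to examine this many rows."""
    return -(-rows_examined // rows_per_page)  # ceiling division

# Hypothetical candidate plans for the same query.
candidates = {
    "full_table_scan":  plan_cost(rows_examined=1_000_000, rows_per_page=100),
    "index_range_scan": plan_cost(rows_examined=20_000,    rows_per_page=100),
}

# A cost-based optimizer predicts each plan's cost and picks the cheapest.
best = min(candidates, key=candidates.get)  # -> "index_range_scan"
```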

 4. The query execution engine executes the plan by making calls to the storage engine API. 


The parsing and optimizing stage outputs a query execution plan, which MySQL's query execution engine uses to process the query. The plan is a data structure; it is not executable byte-code, which is how many other databases execute queries. In contrast to the optimization stage, the execution stage is usually not all that complex: MySQL simply follows the instructions given in the query execution plan. Many of the operations in the plan invoke methods implemented by the storage engine interface, also known as the handler API. Each table in the query is represented by an instance of a handler. If a table appears three times in the query, for example, the server creates three handler instances. Though we glossed over this before, MySQL actually creates the handler instances early in the optimization stage. The optimizer uses them to get information about the tables, such as their column names and index statistics.

5. The server sends the result to the client.

The final step in executing a query is to reply to the client.

 Even queries that don’t return a result set still reply to the client connection with information about the query, such as how many rows it affected.

MySQL Tips : Calculate database and table size

A few MySQL tips.

a. Calculate database sizes:

This query shows the size of each database:

SELECT s.schema_name,
       CONCAT(IFNULL(ROUND(SUM(t.data_length)/1024/1024,2),0.00),"Mb") AS Data_size,
       CONCAT(IFNULL(ROUND(SUM(t.index_length)/1024/1024,2),0.00),"Mb") AS Index_size,
       COUNT(table_name) AS total_tables
FROM INFORMATION_SCHEMA.SCHEMATA s
LEFT JOIN INFORMATION_SCHEMA.TABLES t ON s.schema_name = t.table_schema
WHERE s.schema_name NOT IN ("mysql","information_schema","test")
GROUP BY s.schema_name
ORDER BY Data_size DESC;

This query shows database size along with per-table size:

SELECT s.schema_name, t.table_name,
       CONCAT(IFNULL(ROUND(SUM(t.data_length)/1024/1024,2),0.00),"Mb") AS data_size,
       CONCAT(IFNULL(ROUND(SUM(t.index_length)/1024/1024,2),0.00),"Mb") AS index_size,
       t.ENGINE AS ENGINE, t.table_rows AS TABLE_ROWS,
       t.row_format AS TABLE_ROW_FORMAT, DATE(t.update_time)
FROM INFORMATION_SCHEMA.SCHEMATA s
LEFT JOIN INFORMATION_SCHEMA.TABLES t ON s.schema_name = t.table_schema
WHERE s.schema_name NOT IN ("mysql","information_schema")
GROUP BY s.schema_name, t.table_name, TABLE_ROW_FORMAT, ENGINE
ORDER BY TABLE_ROWS DESC, data_size DESC, index_size DESC;

This query shows the size of tables using the MEMORY engine:

SELECT CONCAT(table_schema,'.',table_name) AS Database_Tablename,
       table_rows AS Rows,
       CONCAT(ROUND(data_length/(1024*1024),2),'M') AS DATA,
       CONCAT(ROUND(index_length/(1024*1024),2),'M') AS idx,
       CONCAT(ROUND((data_length+index_length)/(1024*1024),2),'M') AS total_size,
       ROUND(index_length/data_length,2) AS idxfrac
FROM information_schema.TABLES
WHERE ENGINE='MEMORY';

This query shows the top five tables by space used:

SELECT CONCAT(table_schema,'.',table_name) AS Database_Tablename,
       table_rows AS Rows,
       CONCAT(ROUND(data_length/(1024*1024),2),'M') AS DATA,
       CONCAT(ROUND(index_length/(1024*1024),2),'M') AS idx,
       CONCAT(ROUND((data_length+index_length)/(1024*1024),2),'M') AS total_size,
       ROUND(index_length/data_length,2) AS idxfrac
FROM information_schema.TABLES
ORDER BY data_length+index_length DESC
LIMIT 5;

MYSQL : Checking and Repairing tables




If your database is MySQL, though, there's still hope. Over the course of this article, I will show you how to use MySQL's built-in crash recovery tools to check your database, and hopefully recover all the data you just lost.

Built-in tools

When it comes to checking and repairing tables, MySQL offers two options:

myisamchk is designed specifically to check and repair MyISAM tables (the default table type used by MySQL). This tool can scan your databases, identify problems, optimize tables for faster lookups, and optionally repair corrupted tables. The myisamchk tool is invoked from the command line.

MySQL also allows you to check and repair tables using SQL commands. The CHECK TABLE, REPAIR TABLE, and OPTIMIZE TABLE commands can be used on a running MySQL server and need to be entered through a MySQL client.

Most of the time, it's preferable to use myisamchk because it is significantly faster than using SQL commands.

Checking tables

If you're having trouble accessing a table, first try checking it for errors. To check a table, shut down the server and type myisamchk at the command prompt, followed by the table's file name, as shown below:
$ myisamchk /usr/local/mysql/data/phpbb/user.myi
Checking MyISAM file: /usr/local/mysql/data//user.MYI
Data records: 1153  Deleted blocks: 0
- check file-size
- check key delete-chain
- check record delete-chain
- check index reference
- check data record references index: 1
- check record links
Use the complete path to the table file instead of just the table name. Remember to include the file extension as well.

myisamchk options:

--fast : perform a quick check, only verifying that the table handler closed successfully
--medium-check : perform a faster check, by calculating a checksum for the indexes in each record and verifying that checksum against the index tree
--extend-check : perform a thorough check of the table, verifying the data in each record

Repairing tables
Next, shut down the MySQL server. This is necessary because myisamchk makes changes to the table file itself, so you don't want other users accessing or otherwise manipulating the file during the repair operation. It's also advisable to take a physical backup of the table files first. Once the table file is backed up and the server is down, you can run myisamchk with the --recover option, as shown below:
$ myisamchk --recover /usr/local/mysql/data/phpbb/user.myi
- recovering (with sort) MyISAM-table '/usr/local/mysql/data/phpbb/user.MYI'
Data records: 1153
- Fixing index 1

The --recover option reconstructs the MySQL table index file after weeding the corrupted table of invalid or previously deleted records that could be causing a problem.
In the unlikely event that the --recover option fails, revert to the original table file and try the --safe-recover option. This is slower, because MySQL needs to scan through the records in the data file one by one and then restore the index, but it can sometimes work better than a regular repair operation.
$ myisamchk --safe-recover /usr/local/mysql/data/phpbb/user.myi
 - recovering (with keycache) MyISAM-table '/usr/local/mysql/data/phpbb/user.MYI'
Data records: 4
Using CHECK TABLE and REPAIR TABLE

Check and repair operations can also be run from a MySQL prompt, via the CHECK TABLE and REPAIR TABLE commands. Consider the following example of running a CHECK TABLE command:

mysql> CHECK TABLE user;
+--------------+-------+----------+----------+
| Table        | Op    | Msg_type | Msg_text |
+--------------+-------+----------+----------+
| phpbb.user   | check | status   | OK       |
+--------------+-------+----------+----------+
1 row in set (0.02 sec)
You can add the FAST, MEDIUM, and EXTENDED keywords to the command to obtain behavior similar to that available in myisamchk. For example:

mysql> CHECK TABLE user EXTENDED;
+--------------+-------+----------+----------+
| Table        | Op    | Msg_type | Msg_text |
+--------------+-------+----------+----------+
| phpbb.user   | check | status   | OK       |
+--------------+-------+----------+----------+
1 row in set (0.02 sec)

When it comes to repairing tables, too, MySQL offers the REPAIR TABLE option, as below:
mysql> REPAIR TABLE user;
+--------------+--------+----------+----------+
| Table        | Op     | Msg_type | Msg_text |
+--------------+--------+----------+----------+
| phpbb.user   | repair | status   | OK       |
+--------------+--------+----------+----------+
1 row in set (0.01 sec)

As with the myisamchk tool, the REPAIR TABLE command can take one of two additional options: QUICK, which tries a quick repair, and EXTENDED, which rebuilds the index after reading each record in the table.
An option here is to use the mysqlcheck utility, which provides a command line front end to the CHECK TABLE and REPAIR TABLE commands, allowing you to check and repair tables without first taking the server down. Everything that you can do with CHECK TABLE and REPAIR TABLE can be done with mysqlcheck, by passing appropriate command line options to the program.

MySQL : Allocating memory for caches



 If the server is dedicated to MySQL, any memory you don’t reserve for the operating system or for query processing is available for caches.
MySQL needs more memory for caches than anything else. It uses caches to avoid disk accesses, which are orders of magnitude slower than accessing data in memory.
The operating system may cache some data on MySQL’s behalf (especially for MyISAM), but MySQL needs lots of memory for itself too.
 The following are the most important caches to consider for the majority of installations:
• The operating system caches for MyISAM data
• MyISAM key caches
• The InnoDB buffer pool
• The query cache

There are other caches, but they generally don't use much memory.
It is much easier to tune a server if you’re using only one storage engine.
 If you’re using only MyISAM tables, you can disable InnoDB completely, and if you’re using only InnoDB, you need to allocate only minimal resources for MyISAM (MySQL uses MyISAM tables internally for some operations). But if you’re using a mixture of storage engines, it can be very hard to figure out the right balance between them.

The MyISAM Key Cache

 The MyISAM key caches are also referred to as key buffers; there is one by default, but you can create more. Unlike InnoDB and some other storage engines, MyISAM itself caches only indexes, not data (it lets the operating system cache the data).
If you use mostly MyISAM, you should allocate a lot of memory to the key caches. The most important option is the key_buffer_size, which you should try setting to between 25% and 50% of the amount of memory you reserved for caches.
The remainder will be available for the operating system caches, which the operating system will usually fill with data from MyISAM's .MYD files.

MySQL 5.0 has a hard upper limit of 4 GB for this variable, no matter what architecture you're running. (MySQL 5.1 allows larger sizes. Check the current documentation for your version of the server.) By default MyISAM caches all indexes in the default key buffer, but you can create multiple named key buffers. This lets you keep more than 4 GB of indexes in memory at once. To create key buffers named key_buffer_1 and key_buffer_2, each sized at 1 GB, place the following in the configuration file:

key_buffer_1.key_buffer_size = 1G
key_buffer_2.key_buffer_size = 1G

Now there are three key buffers: the two explicitly created by those lines and the default buffer. You can use the CACHE INDEX command to map tables to caches. For example, you can tell MySQL to use key_buffer_1 for the indexes from tables t1 and t2 with the following SQL statement:
mysql> CACHE INDEX t1, t2 IN key_buffer_1;
Now when MySQL reads blocks from the indexes on these tables, it will cache the blocks in the specified buffer. You can also preload the tables’ indexes into the cache with the LOAD INDEX command:
mysql> LOAD INDEX INTO CACHE t1, t2;
You can place this SQL into a file that’s executed when MySQL starts up. The filename must be specified in the init_file option, and the file can include multiple SQL commands, each on a single line (no comments are allowed).
Any indexes you don't explicitly map to a key buffer will be assigned to the default buffer the first time MySQL needs to access the .MYI file.

You can monitor the performance and usage of the key buffers with information from SHOW STATUS and SHOW VARIABLES. You can calculate the hit ratio and the percentage of the buffer in use with these equations:
Cache hit ratio:
100 - ( (Key_reads * 100) / Key_read_requests )

Percentage of buffer in use:
100 - ( (Key_blocks_unused * key_cache_block_size) * 100 / key_buffer_size )
It’s good to know the cache hit rate, but this number can be misleading.
For example, the difference between 99% and 99.9% looks small, but it really represents a tenfold increase. The cache hit rate is also application-dependent: some applications might work fine at 95%, whereas others might be I/O-bound at 99.9%.
You might even be able to get a 99.99% hit rate with properly sized caches. The number of cache misses per second is generally much more empirically useful.
Suppose you have a single hard drive that can do 100 random reads per second. Five misses per second will not cause your workload to be I/O-bound, but 80 per second will likely cause problems.
You can use the following equation to calculate the number of cache misses per second:

Key_reads / Uptime

Calculate the number of misses incrementally over intervals of 10 to 100 seconds, so you can get an idea of the current performance. The following command will show the incremental values every 10 seconds:

$ mysqladmin extended-status -r -i 10 | grep Key_reads
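The formulas above can be sketched in a few lines of Python (the status values below are made-up sample numbers, not real server output):

```python
def key_cache_stats(key_reads, key_read_requests,
                    key_blocks_unused, key_cache_block_size,
                    key_buffer_size):
    """Hit ratio and percent of key buffer in use, per the equations above."""
    hit_ratio = 100 - (key_reads * 100 / key_read_requests)
    pct_in_use = 100 - (key_blocks_unused * key_cache_block_size
                        * 100 / key_buffer_size)
    return hit_ratio, pct_in_use

def misses_per_second(key_reads_t0, key_reads_t1, interval_seconds):
    """Incremental cache misses per second between two samples of Key_reads
    taken interval_seconds apart."""
    return (key_reads_t1 - key_reads_t0) / interval_seconds

# Made-up sample values:
hit, used = key_cache_stats(key_reads=1_000, key_read_requests=1_000_000,
                            key_blocks_unused=10_000,
                            key_cache_block_size=1024,
                            key_buffer_size=128 * 1024 * 1024)
# hit -> 99.9; used -> about 92.4
rate = misses_per_second(50_000, 50_300, 10)  # -> 30.0 misses/sec
```

A 99.9% hit rate looks great, but as the text notes, the misses-per-second number is what tells you whether the disk can keep up.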
When you’re deciding how much memory to allocate to the key caches, it might help to know how much space your MyISAM indexes are actually using on disk.
You don't need to make the key buffers larger than the data they will cache. You can find out how much space your MyISAM indexes are actually using on disk with:

$ du -sch `find /path/to/mysql/data/directory/ -name "*.MYI"`
Remember that MyISAM uses the operating system cache for the data files, which are often larger than the indexes.
Therefore, it often makes sense to leave more memory for the operating system cache than for the key caches. Finally, even if you don’t have any MyISAM tables, bear in mind that you still need to set key_buffer_size to a small amount of memory, such as 32M.
The MySQL server sometimes uses MyISAM tables for internal purposes, such as temporary tables for GROUP BY queries.

The MyISAM key block size

The key block size is important (especially for write-intensive workloads) because of the way it causes MyISAM, the operating system cache, and the filesystem to interact. If the key block size is too small, you may encounter read-around writes, which are writes that the operating system cannot perform without first reading some data from the disk. Here's how a read-around write happens, assuming the operating system's page size is 4 KB (typically true on the x86 architecture) and the key block size is 1 KB:
1. MyISAM requests a 1 KB key block from disk.
2. The operating system reads 4 KB of data from the disk and caches it, then passes the desired 1 KB of data to MyISAM.
3. The operating system discards the cached data in favor of some other data.
4. MyISAM modifies the 1 KB key block and asks the operating system to write it back to disk.
5. The operating system reads the same 4 KB of data from the disk into the operating system cache, modifies the 1 KB that MyISAM changed, and writes the entire 4 KB back to disk.
The read-around write happened in step 5, when MyISAM asked the operating system to write only part of a 4 KB page. If MyISAM’s block size had matched the operating system’s, the disk read in step 5 could have been avoided.
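The alignment condition can be sketched numerically (illustrative Python; the page and block sizes are the ones from the scenario above):

```python
OS_PAGE = 4096  # typical x86 page size, in bytes

def extra_reads_for_write(key_block_size, os_page=OS_PAGE):
    """Extra disk reads the OS must do to write back one key block whose
    surrounding page is no longer cached. If the block covers whole pages,
    the OS can write it directly; otherwise it must first read the page
    it falls inside (a read-around write). Simplified illustration only."""
    if key_block_size % os_page == 0:
        return 0  # aligned: write the whole page(s) directly
    return 1      # must read the 4 KB page before modifying part of it

assert extra_reads_for_write(1024) == 1  # 1 KB block: read-around write
assert extra_reads_for_write(4096) == 0  # matched sizes: no extra read
```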
Unfortunately, in MySQL 5.0 and earlier, there’s no way to configure the key block size.
However, in MySQL 5.1 and later, you can avoid read-around writes by making MyISAM’s key block size the same as the operating system’s.
The myisam_block_size variable controls the key block size. You can also specify the size for each key with the KEY_BLOCK_SIZE option in a CREATE TABLE or CREATE INDEX statement, but because all keys are stored in the same file, you really need all of them to have blocks as large as or larger than the operating system's to avoid alignment issues that could still cause read-around writes. (For example, if one key has 1 KB blocks and another has 4 KB blocks, the 4 KB block boundaries might not match the operating system's page boundaries.)
The InnoDB Buffer Pool

If you use mostly InnoDB tables, the InnoDB buffer pool probably needs more memory than anything else.
Unlike the MyISAM key cache, the InnoDB buffer pool doesn’t just cache indexes:
 it also holds row data, the adaptive hash index, the insert buffer, locks, and other internal structures.
InnoDB also uses the buffer pool to help it delay writes, so it can merge many writes together and perform them sequentially. In short, InnoDB relies heavily on the buffer pool, and you should be sure to allocate enough memory to it.
The MySQL manual suggests using up to 80% of the machine’s physical memory for the buffer pool on a dedicated server; in reality, you can use more than that if the machine has a lot of memory.
As with the MyISAM key buffers, you can use variables from SHOW commands or tools such as innotop to monitor your InnoDB buffer pool’s memory usage and performance.
There’s no equivalent of LOAD INDEX INTO CACHE for InnoDB tables.
However, if you’re trying to warm up a server and get it ready to handle a heavy load, you can issue queries that perform full table scans or full index scans. In most cases, you should make the InnoDB buffer pool as large as your available memory allows. However, in rare circumstances, very large buffer pools (say, 50 GB) can cause long stalls.
For example, a large buffer pool may become slow during checkpoints or insert buffer merge operations, and concurrency can drop as a result of locking.
If you experience these problems, you may have to reduce the buffer pool size. You can change the innodb_max_dirty_pages_pct variable to instruct InnoDB to keep more or fewer dirty (modified) pages in the buffer pool.
If you allow a lot of dirty pages, InnoDB can take a long time to shut down, because it writes the dirty pages to the data files upon shutdown. You can monitor the number of dirty pages by watching the Innodb_buffer_pool_pages_dirty server status variable or using innotop to monitor SHOW INNODB STATUS.
 Lowering the value of the innodb_max_dirty_pages_pct variable doesn’t actually guarantee that InnoDB will keep fewer dirty pages in the buffer pool. Instead, it controls the threshold at which InnoDB stops being “lazy.” InnoDB’s default behavior is to flush dirty pages with a background thread, merging writes together and performing them sequentially for efficiency. This behavior is called “lazy” because it lets InnoDB delay flushing dirty pages in the buffer pool, unless it needs to use the space for some other data. When the percentage of dirty pages exceeds the threshold, InnoDB will flush pages as quickly as it can to try to keep the dirty page count lower.
The variable’s default value is 90, so by default InnoDB will flush lazily until the buffer pool is 90% full of dirty pages. You can tweak the threshold for your workload if you wish to spread out the writes a bit more. For example, lowering it to 50 will generally cause InnoDB to do more write operations, because it will flush pages sooner and therefore be unable to batch the writes as well. However, if your workload has a lot of write spikes, using a lower value may help InnoDB absorb the spikes better: it will have more “spare” memory to hold dirty pages, so it won’t have to wait for other dirty pages to be flushed to disk.
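The threshold behavior can be sketched as follows (a heavy simplification in Python; real InnoDB flushing is far more nuanced):

```python
def flush_mode(dirty_pages, total_pages, max_dirty_pct=90):
    """Decide whether InnoDB flushes lazily (background thread, batched
    sequential writes) or aggressively (as fast as it can), based on the
    dirty-page ratio versus innodb_max_dirty_pages_pct. The default
    threshold of 90 matches the text. Illustration only."""
    dirty_pct = dirty_pages * 100 / total_pages
    return "aggressive" if dirty_pct > max_dirty_pct else "lazy"

print(flush_mode(800, 1000))                    # 80% dirty, default 90 -> lazy
print(flush_mode(800, 1000, max_dirty_pct=50))  # lowered threshold -> aggressive
```

Note how lowering the threshold does not cap dirty pages; it only changes when InnoDB switches out of lazy mode.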
The Thread Cache

The thread cache holds threads that aren't currently associated with a connection but are ready to serve new connections. When there's a thread in the cache and a new connection is created, MySQL removes the thread from the cache and gives it to the new connection. When the connection is closed, MySQL places the thread back into the cache, if there's room. If there isn't room, MySQL destroys the thread. As long as MySQL has a free thread in the cache, it can respond very rapidly to connection requests, because it doesn't have to create a new thread for each connection. The thread_cache_size variable specifies the number of threads MySQL can keep in the cache. You probably won't need to tune this value, unless your server gets many connection requests.
To check whether the thread cache is large enough, watch the Threads_created status variable. We generally try to keep the thread cache large enough that we see fewer than 10 new threads created each second, but it’s often pretty easy to get this number lower than 1 per second. A good approach is to watch the
Threads_connected variable and try to set thread_cache_size large enough to handle the typical fluctuation in your workload. For example, if Threads_connected usually stays between 100 and 200, you can set the cache size to 100. If it stays between 500 and 700, a thread cache of 200 should be large enough. Think of it this way: at 700 connections, there are probably no threads in the cache; at 500 connections, there are 200 cached threads ready to be used if the load increases to 700 again. Making the thread cache very large is probably not necessary for most uses, but keeping it small doesn’t save much memory, so there’s little benefit in doing so. Each thread that’s in the thread cache or sleeping typically uses around 256 KB of memory.
This is very little compared to the amount of memory a thread can use when a connection is actively processing a query. In general, you should keep your thread cache large enough that Threads_created doesn’t increase very often. If this is a very large number, however (e.g., many thousand threads), you might want to set it lower because some operating systems don’t handle very large numbers of threads well, even when most of them are sleeping.
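The sizing heuristic above can be expressed directly (illustrative Python; the per-thread figure is the ~256 KB estimate from the text):

```python
def suggest_thread_cache_size(threads_connected_min, threads_connected_max):
    """Rule of thumb from the text: size the thread cache to cover the
    typical fluctuation in Threads_connected."""
    return threads_connected_max - threads_connected_min

def thread_cache_memory_kb(cache_size, per_thread_kb=256):
    """Approximate memory held by cached (sleeping) threads."""
    return cache_size * per_thread_kb

print(suggest_thread_cache_size(100, 200))  # -> 100
print(suggest_thread_cache_size(500, 700))  # -> 200
print(thread_cache_memory_kb(200))          # -> 51200 KB, i.e. ~50 MB
```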
The Table Cache

The table cache is similar in concept to the thread cache, but it stores objects that represent tables. Each object in the cache contains the associated table's parsed .frm file, plus other data. Exactly what else is in the object depends on the table's storage engine. For example, for MyISAM, it holds the table data and/or index file descriptors. For merge tables it may hold many file descriptors, because merge tables can have many underlying tables. The table cache can help you reuse resources.
For instance, when a query requests access to a MyISAM table, MySQL might be able to give it a file descriptor from the cached object instead of opening the file.
The table cache can also help avoid some of the I/O required for marking a MyISAM table as "in use" in the index headers. The table cache's design is a little MyISAM-centric; this is one of the areas where the separation between the server and the storage engines is not completely clean, for historical reasons. The table cache is a little less important for InnoDB, because InnoDB doesn't rely on it for as many purposes (such as holding file descriptors; it has its own version of a table cache for this purpose). However, even InnoDB benefits from caching the parsed .frm files.
In MySQL 5.1, the table cache is separated into two parts: a cache of open tables and a table definition cache (configured via the table_open_cache and table_definition_cache variables). Thus, the table definitions (the parsed .frm files) are separated from the other resources, such as file descriptors. Opened tables are still per-thread, per-table-used, but the table definitions are global and can be shared among all connections efficiently. You can generally set table_definition_cache high enough to cache all your table definitions. Unless you have tens of thousands of tables, this is likely to be the easiest approach. If the Opened_tables status variable is large or increasing, the table cache isn't large enough, and you should increase the table_cache system variable (or table_open_cache, in MySQL 5.1).
The only real downside to making the table cache very large is that it might cause longer shutdown times when your server has a lot of MyISAM tables, because the key blocks have to be flushed and the tables have to be marked as no longer open.
It can also make FLUSH TABLES WITH READ LOCK take a long time to complete, for the same reason. If you get errors indicating that MySQL can't open any more files (use the perror utility to check what the error number means), you might also need to increase the number of files MySQL is allowed to keep open. You can do this with the open_files_limit server variable in your my.cnf file.
The thread and table caches don’t really use much memory, and they are beneficial because they conserve resources. Although creating a new thread and opening a new file aren’t really expensive compared to other things MySQL might do, the overhead can add up quickly under a high-concurrency workload. Caching threads and tables can improve efficiency. 

  

MySQL Memory Allocation


Allocating RAM for MySQL


The Short Answer

If using just MyISAM, set key_buffer_size to 20% of _available_ RAM. (Plus innodb_buffer_pool_size = 0.)
If using just InnoDB, set innodb_buffer_pool_size to 70% of _available_ RAM. (Plus key_buffer_size = 10M; small, but not zero.)
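The short answer can be sketched as a helper (illustrative Python; the 20%/70% figures are the rules of thumb just stated, and "available RAM" means what's left after the OS and other processes):

```python
def suggest_buffer_sizes(available_ram_mb, engine):
    """Rule-of-thumb cache sizing for a dedicated server, per the short
    answer above. Returns (key_buffer_size, innodb_buffer_pool_size) in MB."""
    if engine == "myisam":
        return int(available_ram_mb * 0.20), 0
    if engine == "innodb":
        # Small but non-zero key buffer for MySQL's internal MyISAM use.
        return 10, int(available_ram_mb * 0.70)
    raise ValueError("mixed workloads need judgment, not a formula")

print(suggest_buffer_sizes(8192, "myisam"))  # -> (1638, 0)
print(suggest_buffer_sizes(8192, "innodb"))  # -> (10, 5734)
```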

Rule of thumb for tuning mysql:
·         Start with released copy of my.cnf / my.ini.
·         Change key_buffer_size and innodb_buffer_pool_size according to engine usage and RAM.
·         Slow queries can usually be 'fixed' via indexes, schema changes, or SELECT changes, not by tuning.
·         Don't get carried away with the Query cache until you understand what it can and cannot do.
·         Don't change anything else unless you run into trouble (eg, max connections).
·         Be sure the changes are under the [mysqld] section, not some other section.

Now for the gory details. (NDB Cluster is not discussed here.)

What is the key_buffer?

MyISAM does two different things for caching.
·         Index blocks (1KB each, BTree structured, from .MYI file) live in the "key buffer".
·         Data block caching (from .MYD file) is left to the OS, so be sure to leave a bunch of free space for this. 

Caveat: Some flavors of OS always claim to be using over 90%, even when there is really lots of free space.

SHOW GLOBAL STATUS LIKE 'Key%';

Then calculate Key_read_requests / Key_reads. If the ratio is high (say, over 10), then the key_buffer is big enough.

What is the buffer_pool?

InnoDB does all its caching in the "buffer pool", whose size is controlled by innodb_buffer_pool_size. It contains 16KB data and index blocks from the open tables, plus some maintenance overhead.
MySQL 5.5 (and 5.1 with the "Plugin") lets you declare the block size to be 8KB or 4KB. MySQL 5.5 allows multiple buffer pools; this can help because there is one mutex per pool, thereby relieving some of the Mutex bottleneck.
Mutex bottleneck

MySQL was designed in the days of single-CPU machines, and designed to be easily ported to many different architectures. Unfortunately, that led to some sloppiness in how actions are interlocked. There is a small number (too small) of "mutexes" guarding access to several critical processes.

Of note:

·         MyISAM's key_buffer
·         The Query Cache
·         InnoDB's buffer_pool

With multi-core boxes, the mutex problem causes performance problems. In general, past 4-8 cores, MySQL gets slower, not faster. MySQL 5.5 and Percona's XtraDB are making that somewhat better in InnoDB; the practical limit for cores is more like 32, and performance tends to plateau after that rather than declining. 5.6 claims to scale up to about 48 cores.
HyperThreading and Multiple cores (CPUs)

Short answers:
·         Turn off HyperThreading
·         Turn off any cores beyond 8
·         HyperThreading is mostly a thing of the past, so this section may not apply.

HyperThreading is great for marketing, lousy for performance. It involves having two processing units sharing a single hardware cache. If both units are doing the same thing, the cache will be reasonably useful. If the units are doing different things, they will be clobbering each other's cache entries. Furthermore, MySQL is not great at using multiple cores. So, if you turn off HT, the remaining cores run a little faster.

32-bit OS and MySQL

First, the OS (and the hardware?) may conspire to not let you use all 4GB, if that is what you have.
If you have more than 4GB of RAM, the excess beyond 4GB is _totally_ inaccessible and unusable on a 32-bit OS.
Secondly, the OS probably has a limit on how much RAM it will allow any process to use. Example: FreeBSD's maxdsiz, which defaults to 512MB. Example:
$ ulimit -a
...
max memory size (kbytes, -m) 524288
So, once you have determined how much RAM is available to mysqld, apply the 20%/70% rules, but round down some. If you get an error like [ERROR] /usr/libexec/mysqld: Out of memory (Needed xxx bytes), it probably means that MySQL exceeded what the OS is willing to give it.

Decrease the cache settings.
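For example, on a 32-bit build that the OS caps at roughly 2GB of address space, a conservative configuration might be (hypothetical sizes, following the 20%/70% rules rounded down):

```ini
[mysqld]
key_buffer_size         = 300M   # ~20% of the usable RAM, rounded down
innodb_buffer_pool_size = 1200M  # ~70%, rounded down to stay under the process limit
```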

64-bit OS with 32-bit MySQL

The OS is not limited by 4GB, but MySQL is. If you have at least 4GB of RAM, then maybe these would be good:
·         key_buffer_size = 20% of _all_ of RAM, but not more than 3G
·         buffer_pool = 3G

You should probably upgrade MySQL to 64-bit.

64-bit OS and MySQL

MyISAM only: key_buffer_size (before 5.0.52 / 5.1.23) had a hard limit of 4G. See also the 5.1 restrictions. Otherwise, use about 20% of RAM. Set (in my.cnf / my.ini) innodb_buffer_pool_size = 0.
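A sketch for such a MyISAM-only server with 16GB of RAM (hypothetical sizes, following the 20% rule):

```ini
[mysqld]
key_buffer_size         = 3200M  # ~20% of 16GB, and under the old 4G hard limit
innodb_buffer_pool_size = 0      # InnoDB not used, so give it nothing
```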

InnoDB only: 


innodb_buffer_pool_size = 70% of RAM. If you have lots of RAM and are using 5.5, then consider having multiple pools. Recommend 1-16 instances, such that each one is no smaller than 1GB. (Sorry, no metric on how much this will help; probably not a lot.) Meanwhile, set key_buffer_size = 20M (tiny, but non-zero). If you have a mixture of engines, lower both numbers.

max_connections, thread_stack

Each "thread" takes some amount of RAM. This used to be about 200KB; 100 threads would be 20MB, not a significant amount. If you have max_connections = 1000, then you are talking about 200MB, maybe more. Having that many connections probably implies other issues that should be addressed. Thread stack overrun rarely happens. If it does, do something like thread_stack = 256K.

table_cache (table_open_cache)

(The name changed in some version.) The OS has some limit on the number of open files it will let a process have. Each table needs 1 to 3 open files. Each PARTITION is effectively a table. Most operations on a partitioned table open _all_ partitions. In *nix, ulimit tells you what the file limit is. The maximum value is in the tens of thousands, but sometimes it is set to only 1024. This limits you to about 300 tables. More discussion on ulimit. (This paragraph is disputed.) On the other side, the table cache is (was) inefficiently implemented -- lookups were done with a linear scan.

Hence, setting table_cache in the thousands could actually slow down MySQL. (Benchmarks have shown this.) You can see how well your system is performing via SHOW GLOBAL STATUS, computing the opens/second via Opened_files / Uptime. If this is more than, say, 5, table_cache should be increased. If it is less than, say, 1, you might get improvement by decreasing table_cache.

Query Cache

Short answer: query_cache_type = OFF and query_cache_size = 0. The QC is effectively a hash mapping SELECT statements to result sets.

Long answer...
 
There are many aspects of the "Query cache"; many are negative.
·         Novice Alert! The QC is totally unrelated to the key_buffer and buffer_pool.
·         When it is useful, the QC is blazingly fast. It would not be hard to create a benchmark that runs 1000x faster.
·         There is a single mutex controlling the QC.
·         The QC, unless it is OFF / 0, is consulted for _every_ SELECT.
·         Yes, the mutex is hit even if query_cache_type = DEMAND (2).
·         Yes, the mutex is hit even for SQL_NO_CACHE.
·         Any change to a query (even adding a space) leads (potentially) to a different entry in the QC.

"Pruning" is costly and frequent:
·         When _any_ write happens on a table, _all_ entries in the QC for _that_ table are removed.
·         It happens even on a readonly Slave.
·         Purges are performed with a linear algorithm, so a large QC (even 200MB) can be noticeably slow.

To see how well your QC is performing, run SHOW GLOBAL STATUS LIKE 'Qc%'; then compute the read hit rate: Qcache_hits / Qcache_inserts. If it is over, say, 5, the QC might be worth keeping. If you decide the QC is right for you, then I recommend:
·         query_cache_size = no more than 50M
·         query_cache_type = DEMAND
·         SQL_CACHE or SQL_NO_CACHE in all SELECTs, based on which queries are likely to benefit from caching.

thread_cache_size

This is a minor tunable. Zero will slow down thread (connection) creation.
A small (say, 10), non-zero number is good. The setting has essentially no impact on RAM usage. It is the number of extra processes to hang onto. It does not restrict the number of threads; max_connections does.

swappiness

RHEL, in its infinite wisdom, decided to let you control how aggressively the OS will preemptively swap RAM.
This is good in general, but lousy for MySQL. MySQL would love for RAM allocations to be reasonably stable -- the caches are (mostly) pre-allocated; the threads, etc., are (mostly) of limited scope. ANY swapping is likely to severely hurt the performance of MySQL. With a high value for swappiness, you lose some RAM because the OS is trying to keep a lot of space free for future allocations (that MySQL is not likely to need). With swappiness = 0, the OS will probably crash rather than swap. I would rather have MySQL limping than dead.
Somewhere in between (say, 5?) might be a good value for a MySQL-only server.

NUMA

OK, it's time to complicate the architecture of how a CPU talks to RAM. NUMA (Non-Uniform Memory Access) enters the picture. Each CPU (or maybe socket with several cores) has a part of the RAM hanging off it. This makes memory access faster for local RAM, but slower (tens of cycles slower) for RAM hanging off other CPUs. Then the OS enters the picture. In at least one case (RHEL?), two things seem to be done:
·         OS allocations are pinned to the 'first' CPU's RAM.
·         Other allocations go by default to the first CPU until it is full.

Now for the problem.
·         The OS and MySQL have allocated all the 'first' RAM.
·         MySQL has allocated some of the second RAM.
·         The OS needs to allocate something. Ouch -- it is out of room in the one CPU where it is willing to allocate its stuff, so it swaps out some of MySQL. Bad.

Possible solution:

Configure the BIOS to "interleave" the RAM allocations. This should prevent the premature swapping, at the cost of off-CPU RAM accesses half the time. Well, you have the costly accesses anyway, since you really want to use all of RAM.
Overall performance loss/gain: a few percent.

huge pages

This is another hardware performance gimmick.
For a CPU to access RAM, especially when mapping a 64-bit address to somewhere in, say, 128GB of 'real' RAM, the TLB is used. (TLB = Translation Lookaside Buffer.) Think of the TLB as a hardware associative-memory lookup table: given a 64-bit virtual address, what is the real address?
Because it is an associative memory of finite size, sometimes there will be "misses" that require reaching into real RAM to resolve the lookup. This is costly, so it should be avoided. Normally, RAM is 'paged' in 4KB pieces; the TLB actually maps the top (64-12) bits into a specific page. Then the bottom 12 bits of the virtual address are carried over intact. For example, 128GB of RAM broken into 4KB pages means 32M page-table entries. This is a lot, and probably far exceeds the capacity of the TLB.
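The arithmetic, spelled out (the 128GB RAM size is just the example figure from above):

```python
ram = 128 * 2**30     # 128GB of RAM, in bytes
page_4k = 4 * 2**10   # normal 4KB page
page_4m = 4 * 2**20   # 4MB huge page

print(ram // page_4k)  # 33554432 -- 32M page-table entries with 4KB pages
print(ram // page_4m)  # 32768    -- only 32K entries with 4MB huge pages
```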
So, enter the "huge page" trick. With the help of both the hardware and the OS, it is possible to have some of RAM in huge pages of, say, 4MB (instead of 4KB). This leads to far fewer TLB entries, but it means the unit of paging is 4MB for such parts of RAM. Hence, huge pages tend to be non-pageable.
Now RAM is broken into pageable and non-pageable parts; what parts can reasonably be non-pageable? In MySQL, the innodb_buffer_pool is a perfect candidate.
So, by correctly configuring these, InnoDB can run a little faster:
·         Huge pages enabled
·         Tell the OS to allocate the right amount (namely, to match the buffer_pool)
·         Tell MySQL to use huge pages
See innodb memory usage vs swap; that thread has more details on what to look for and what to set. Overall performance gain: a few percent. Yawn.

ENGINE=MEMORY
This is a little-used alternative to MyISAM and InnoDB. The data is not persistent, so it has limited uses. The size of a MEMORY table is limited to max_heap_table_size, which defaults to 16MB. I mention it in case you have changed the value to something huge; that would steal from other possible uses of RAM.

Web server

A web server like Apache runs multiple threads. If each thread opens a connection to MySQL, you could run out of connections. Make sure MaxClients (or equivalent) is set to some civilized number (under 50).
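As an illustration (Apache prefork directives; the specific numbers are assumptions, not values from the text):

```apacheconf
# Cap Apache workers so concurrent MySQL connections stay well under max_connections
<IfModule mpm_prefork_module>
    StartServers   5
    MaxClients    40
</IfModule>
```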

Tools

·         MySQLTuner
·         TUNING-PRIMER

There are several tools that advise on memory. One misleading entry they come up with:

Maximum possible memory usage: 31.3G (266% of installed RAM)

Don't let it scare you -- the formulas used are excessively conservative. They assume all of max_connections are in use and active, and doing something memory-intensive.

Total fragmented tables: 23

This implies that OPTIMIZE TABLE _might_ help. I suggest it for tables with either a high percentage of "free space" (see SHOW TABLE STATUS) or where you know you do a lot of DELETEs and/or UPDATEs. Still, don't bother to OPTIMIZE too often. Once a month might suffice.
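As a rough check of that "free space" percentage, using hypothetical SHOW TABLE STATUS numbers:

```python
# Hypothetical SHOW TABLE STATUS values for one table (bytes)
data_length  = 800_000_000
index_length = 200_000_000
data_free    = 300_000_000   # space left behind by DELETEs/UPDATEs

free_pct = 100 * data_free / (data_length + index_length + data_free)
print(round(free_pct, 1))  # 23.1 -- high enough that OPTIMIZE TABLE might help
```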