Thursday, April 14, 2011

Quick Dell OpenManage Email Alerts

Dell OpenManage does not provide a simple way to set a catch-all email alert for platform and storage events. With these few steps, we can configure OpenManage to send an email on any alert.

First, we must create a simple shell script to send the email.

/usr/local/bin/om-alert.sh

#!/bin/sh
# Send a generic notification email whenever OpenManage triggers an alert.
HOST=`hostname`
EMAIL="my_admin@my_network.net"
echo "An OpenManage ALERT has been detected on $HOST. Please log in to the web interface for details." | mail -s "OM ALERT $HOST" "$EMAIL"



We can set individual alerts:

# chmod +x /usr/local/bin/om-alert.sh
# omconfig system alertaction
# omconfig system alertaction -?
# omconfig system alertaction event=powersupply execappath=/usr/local/bin/om-alert.sh
# omconfig system alertaction event=storagesyswarn alert=true broadcast=true execappath=/usr/local/bin/om-alert.sh
# omreport system alertaction



Or we can set the console alert, broadcast, and email actions for all alert types:

# for I in `omconfig system alertaction | sed 's/ *(.*)//; s/>.*//; s/.*[:<] *// ; s/|/ /g;'`; do
echo $I;
omconfig system alertaction event=$I alert=true broadcast=true execappath="/usr/local/bin/om-alert.sh"
done
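Before trusting the setup, it's worth triggering the script by hand once and confirming the test message reaches your admin mailbox (mailq shows anything stuck in the local mail queue):

# /usr/local/bin/om-alert.sh
# mailq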

Sunday, October 24, 2010

Ten Ways To Easily Improve Oracle Solaris ZFS Filesystem Performance

This is a long article, but I hope you'll still find it interesting to read. Let me know if you want me to break down future long articles into multiple parts instead.

One of the most frequently asked questions around ZFS is: "How can I improve ZFS performance?".

This is not to say that ZFS performance is bad. ZFS can be a very fast file system. ZFS is mostly self-tuning, and the inherent nature of its algorithms helps you reach better performance than most RAID controllers and RAID boxes - but without the expensive "controller" part.

Most of the ZFS performance problems that I see are rooted in incorrect assumptions about the hardware, or just unrealistic expectations of the laws of physics.

So let's look at ten ways to easily improve ZFS performance that everyone can implement without being a ZFS expert.

For ease of reading, here's a table of contents:

* The Basics of File System Performance
* Performance Expectations, Goals and Strategy
* #1: Add Enough RAM
* #2: Add More RAM
* #3: Boost Deduplication Performance With Even More RAM
* #4: Use SSDs to Improve Read Performance
* #5: Use SSDs to Improve Write Performance
* #6: Use Mirroring
* #7: Add More Disks
* #8: Leave Enough Free Space
* #9: Hire A ZFS Expert
* #10: Be An Evil Tuner - But Know What You Do
* Bonus: Some Miscellaneous Settings
* Your Turn
* Related Posts

But before we start with our performance tips, let's cover the basics:
The Basics of File System Performance

It's important to distinguish between the two basic types of file system operation:

* Reads, and
* Writes.

This may sound stupidly simple, but data travels very different paths through the ZFS I/O subsystem for reads vs. writes, which means the ways to improve read performance differ from the ways to make writes faster.

Use zpool iostat or iostat(1M) and verify what read/write performance the system sees and whether it matches your observations and expectations.
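For example, the following prints per-vdev read/write statistics every five seconds; the pool name tank is just a placeholder:

# zpool iostat -v tank 5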

Then there are two kinds of file system performance:

* Bandwidth: Measured in MB/s (or GB/s if you're lucky), telling you how much overall data passes through the system over time.
* IOPS: The number of IO operations that are carried out per second.

Again, these different ways of looking at performance can be optimized by different means; you just need to know which category your particular problem falls into.

There are also two patterns of read/write performance:

* Sequential: Predictable, one block after the other, lined up like pearls on a string.
* Random: Unpredictable, unordered, difficult to grasp.

The good news here is that ZFS automatically turns random writes into sequential writes through the magic of copy-on-write. One less class of performance problems to take care of.

And finally, for write I/Os, you should know about the difference between:

* Synchronous Writes: Writes that are only complete after they have been successfully written to stable storage. In ZFS, they're implemented through the ZFS Intent Log, or ZIL. These are most often found in file and database servers and these kinds of writes are very sensitive to latency or IOPS performance.
* Asynchronous Writes: Write operations that may return after being cached in RAM, before they are committed to disk. Performance is easy to get here, at the expense of reliability: If the power fails before the buffer is written to disk, data can be lost.

Performance Expectations, Goals and Strategy

We're almost there, but before we get to the actual performance tips, we need to discuss a few methodical things:

* Set realistic expectations: ZFS is great, yes. But you need to observe the laws of physics. A disk spinning at 10,000 rpm can't deliver more than about 166 random IOPS, because 10,000 revolutions per minute divided by 60 seconds means the head can only position itself above a random block about 166 times per second. Any more than that and your data is not really random. That's just how the numbers play out.
Similarly, RAID-Z means that you'll only get the IOPS performance of a single disk per RAID-Z group, because each filesystem IO will be mapped to all the disks in a RAID-Z group in parallel.
Make sure you know what the limits of your storage are and what performance you can realistically expect, when analyzing your performance and setting performance goals. By the way...
* Define performance goals: What exactly is "too slow"? What performance would be acceptable? Where are you now, and where do you want to be?
Performance goals are important to set, because they tell you when you're done. There are always ways to improve performance, but there's no use in improving performance at all costs. Know when you're done, then celebrate!
* Be systematic: It happens so many times: We try this, then we try that, we measure with cp(1) even though our app is actually a database, then we tweak here and there, and before we know it, we realize: We really know nothing.
Being systematic means defining how to measure the performance we want, establishing the status quo, in a way that is directly related to the actual application we're interested in, then sticking to the same performance measurement method through the whole performance analysis and optimization process.
Otherwise, things become confusing, we lose sight of where we are and we won't be able to tell if we reached our goal or not.

Now that we have an understanding of the kind of performance we want, we know what we can expect from today's hardware, we defined some realistic goals and have a systematic approach to performance optimization, let's begin:
#1: Add Enough RAM

A small amount of the space on your disks is used for storing ZFS metadata. This is the data that ZFS needs so it knows where your actual data is. In a way, this is the roadmap that ZFS needs to find its way through your disks and the data structures on them.

If your server doesn't have enough RAM to store metadata, then it will need to issue extra metadata read IOs for every data read IO to figure out where your data actually is on disk. This is slower than necessary, and you really want to avoid that. If you're really short on RAM, this could have a massive impact!

How much RAM do you need? As a rough rule of thumb, divide the size of your total storage by 1000, then add 1 GB so the OS has some extra RAM of its own to breathe. This means for every TB of data, you'll want at least 1GB of RAM for caching ZFS metadata, in addition to one GB for the OS to feel comfortable in.

Having enough RAM will benefit all of your reads, no matter if they're random or sequential, just because they'll be easier for ZFS to find on your disks, so make sure you have at least n/1000 + 1 GB of RAM, where n is the number of GB in your storage pool.
#2: Add More RAM

ZFS uses every piece of RAM it finds to cache data. It has a very sophisticated caching algorithm that tries to cache both the most frequently used and the most recently used data, adapting the balance between the two as the system runs. ZFS also has some advanced prefetching abilities that can greatly improve performance for different kinds of sequential reads.

All of this works better the more RAM you give to ZFS. But how do you know whether more RAM will give you breakthrough performance, or just a small improvement?

This is where your working set comes in.

Your working set is the part of your data that is used most often: Your top products/websites/customers in an e-commerce database. Your clients with the biggest traffic in your hosting environment. Your most popular files etc.

If your working set fits into RAM, the vast majority of reads can be serviced from RAM most of the time, without having to issue any IOs to slow, spinning disks.

Try to figure out what the most popular subset of your data is, then add enough RAM to your ZFS server to help it live there. This will give you the biggest read performance boost.

If you want something more automated, Ben Rockwood has written a great tool called arc_summary (ARC is the ZFS Adaptive Replacement Cache). The two "Ghost" values tell you exactly how much more memory would have helped you to handle the load that your server has seen in the past.

If you want to influence the balance between user data and metadata in the ZFS ARC cache, check out the primarycache filesystem property that you can set using the zfs(1M) command. For RAM-starved servers with a lot of random reads, it may make sense to restrict the precious RAM cache to metadata and use an L2ARC, explained in tip #4 below.
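As a minimal sketch (the filesystem name is hypothetical), restricting the RAM cache to metadata looks like this:

# zfs set primarycache=metadata tank/randomreads
# zfs get primarycache tank/randomreads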
#3: Boost Deduplication Performance With Even More RAM

In an earlier article, I wrote about the basics of ZFS Deduplication. If you plan to use it, keep in mind that ZFS assembles a table of all the blocks stored in your filesystem and their checksums, so it can determine whether a specific block has already been written and can thus safely be marked as a duplicate.

Deduplication will save you space and it can also add to your performance because it saves you unnecessary read and write IOPS. But the cost of this is the need to keep the dedup table as handy as possible, ideally in RAM.

How big is the ZFS dedup table? Richard Elling pointed out in a recent mailing list post that a ZFS dedup table entry uses about 250 bytes per data block. Assuming an average block size of 8K, a TB of user data would need about 32GB of RAM if you want it to be really fast. If your data tends to be spread over large files, you'll have a bigger average blocksize, say, 64K, and then you'd only need about 4GB of RAM for the dedup table.
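A quick sanity check of those numbers in any POSIX shell: 1 TB at 8K per block is roughly 134 million blocks, and at about 250 bytes per entry that works out to around 32,000 MB of dedup table - the same ballpark as the figure above.

# echo $(( (1024 * 1024 * 1024 / 8) * 250 / 1024 / 1024 ))
32000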

If you don't have that amount of RAM, there's no need to despair, there's always the possibility to...
#4: Use SSDs to Improve Read Performance

If you can't add any more RAM to your server (or if your purchasing department won't allow you), the next best way to increase read performance is to add solid state disks (aka flash memory) as a level 2 ARC cache (L2ARC) to your system.

You can easily configure them with the zpool(1M) command, read the "Cache devices" section of its man-page.
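For example, adding an SSD as an L2ARC cache device to an existing pool looks like this (pool and device names are placeholders):

# zpool add tank cache c4t0d0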

SSDs can deliver two orders of magnitude better IOPS than traditional harddisks, and they're much cheaper on a per-GB basis than RAM.
They form an excellent layer of cache between the ZFS RAM-based ARC and the actual disk storage.

You don't need to observe any reliability requirements when configuring L2ARC devices: If they fail, no data is lost because it can always be retrieved from disk.

This means that L2ARC devices can be cheap, but before you start putting USB sticks into your server, you should make sure they deliver a good performance benefit over your rotating disks :).

SSDs come in various sizes: From drop-in-replacements for existing SATA disks in the range of 32GB to the Oracle Sun F20 PCI card with 96GB of flash and built-in SAS controllers (which is one of the secrets behind Oracle Exadata V2's breakthrough performance), to the mighty fast Oracle Sun F5100 flash array (which is the secret behind Oracle's current TPC-C and other world records) with a whopping 1.96TB of pure flash memory and over a million IOPS. Nice!

And since the dedup table is stored in the ZFS ARC and consequently spills off into the L2ARC if available, using SSDs as cache devices will also benefit deduplication performance.
#5: Use SSDs to Improve Write Performance

Most write performance problems are related to synchronous writes. These are mostly found in file servers and database servers.

With synchronous writes, ZFS needs to wait until each particular IO is written to stable storage, and if that's your disk, then it'll need to wait until the rotating rust has spun into the right place, the harddisk's arm moved to the right position, and finally, until the block has been written. This is mechanical, it's latency-bound, it's slow.

See Roch's excellent article on ZFS NFS performance for a more detailed discussion on this.

SSDs can change the whole game for synchronous writes because they have 100x better latency: No moving parts, no waiting, instant writes, instant performance.

So if you're suffering from a high load in synchronous writes, add SSDs as ZFS log devices (aka ZIL, Logzillas) and watch your synchronous writes fly. Check out the zpool(1M) man page under the "Intent Log" section for more details.
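A minimal sketch with placeholder device names, mirroring the log devices as recommended below:

# zpool add tank log mirror c4t1d0 c4t2d0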

Make sure you mirror your ZIL devices: They are there to guarantee the POSIX requirement for "stable storage" so they must function reliably, otherwise data may be lost on power or system failure.

Also, make sure you use high quality SLC Flash Memory devices, because they can give you reliable write transactions. Cheaper MLC cells can damage existing data if the power fails during write operations, something you really don't want.
#6: Use Mirroring

Many people configure their storage for maximum capacity. They just look at how many TB they can get out of their system. After all, storage is expensive, isn't it?

Wrong. Storage capacity is cheap. Every 18 months or so, the same disk only costs half as much, or you can buy double the capacity for the same price, depending on how you view it.

But storage performance can be precious. So why squeeze the last GB out of your storage if capacity is cheap anyway? Wouldn't it make more sense to trade in capacity for speed?

This is what mirroring disks offer as opposed to RAID-Z or RAID-Z2:

* RAID-Z(2) groups several disks into a RAID group, called a vdev. This means that every I/O operation at the file system level is going to be translated into a parallel group of I/O operations to all of the disks in the same vdev.
The result: Each RAID group can only deliver the IOPS performance of a single disk, because the transaction always has to wait until all of the disks in the same vdev are finished.
This is both true for reads and for writes: The whole pool can only deliver as many IOPS as the total number of striped vdevs times the IOPS of a single disk.
There are cases where the total bandwidth of RAID-Z can take advantage of the aggregate performance of all drives in parallel, but if you're reading this, you're probably not seeing such a case.
* Mirroring behaves differently: For writes, the rules are the same: Each mirrored pair of disks will deliver the write IOPS of a single disk, because each write transaction will need to wait until it has completed on both disks. But a mirrored pair of disks is a much smaller granularity than your typical RAID-Z set (with up to 10 disks per vdev). For 20 disks, this could be the difference between 10x the IOPS of a disk in the mirror case vs. only 2x the IOPS of a disk in a wide stripes RAID-Z2 scenario (8+2 disks per RAID-Z2 vdev). A 5x performance difference!
For reads, the difference is even bigger: ZFS will round-robin across all of the disks when reading from mirrors. This will give you 20x the IOPS of a single disk in a 20 disk scenario, but still only 2x if you use wide stripes of the 8+2 kind.
Of course, the numbers can change when using smaller RAID-Z stripes, but the basic rules are the same and the best performance is always achieved with mirroring.

For a more detailed discussion on this, I highly recommend Richard Elling's post on ZFS RAID recommendations: Space, performance and MTTDL.

Also, there's some more discussion on this in my earlier RAID-GREED-article.

Bottom line: If you want performance, use mirroring.
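As an illustration with placeholder disk names, this creates a pool of two striped mirror vdevs; keep adding mirror pairs to scale IOPS further:

# zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0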
#7: Add More Disks

Our next tip was already buried inside tip #6: Add more disks. The more vdevs ZFS has to play with, the more shoulders it can place its load on and the faster your storage performance will become.

This works both for increasing IOPS and for increasing bandwidth, and it'll also add to your storage space, so there's nothing to lose by adding more disks to your pool.

But keep in mind that the performance benefit of adding more disks (and of using mirrors instead of RAID-Z(2)) only accelerates aggregate performance. The performance of every single I/O operation is still confined to that of a single disk's I/O performance.

So, adding more disks does not substitute for adding SSDs or RAM, but it'll certainly help aggregate IOPS and bandwidth for the cases where lots of concurrent IOPS and bigger overall bandwidth are needed.
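For example, growing an existing pool of mirrors by one more mirrored pair (placeholder device names again) adds capacity plus another vdev's worth of IOPS:

# zpool add tank mirror c1t4d0 c1t5d0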
#8: Leave Enough Free Space

Don't wait until your pool is full before adding new disks, though.

ZFS uses copy-on-write, which means that it writes new data into free blocks, and only when the überblock has been updated does the new state become valid.

This is great for performance because it gives ZFS the opportunity to turn random writes into sequential writes - by choosing the right blocks out of the list of free blocks so they're nicely in order and thus can be written to quickly.

That is, when there are enough blocks.

Because if you don't have enough free blocks in your pool, ZFS will be limited in its choice: It won't be able to pick enough blocks that are in order, and hence it won't be able to create an optimal set of sequential writes, which will hurt write performance.

As a rule of thumb, don't let your pool get more than about 80% full. Once it reaches that point, you should start adding more disks so ZFS has enough free blocks to choose from in sequential write order.
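The CAP column of zpool list shows how full a pool currently is, so this is easy to keep an eye on (pool name is a placeholder):

# zpool list tank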
#9: Hire A ZFS Expert

There's a reason why this point comes up almost last: In the vast majority of ZFS performance cases, one or more of #1-#8 above are almost always the solution.

And they're cheaper than hiring a ZFS performance expert who will likely tell you to add more RAM, or add SSDs or switch from RAID-Z to mirroring after looking at your configuration for a couple of minutes anyway!

But sometimes, a performance problem can be really tricky. You may think it's a storage performance problem, but instead your application may be suffering from an entirely different effect.

Or maybe there are some complex dependencies going on, or some other unusual interaction between CPUs, memory, networking, I/O and storage.

Or perhaps you're hitting a bug or some other strange phenomenon?

So, if all else fails and none of the above options seem to help, contact your favorite Oracle/Sun representative (or send me a mail) and ask for a performance workshop quote.
If your performance problem is really that hard, we want to know about it.
#10: Be An Evil Tuner - But Know What You Do

If you don't want to go for option #9 and if you know what you do, you can check out the ZFS Evil Tuning Guide.

There's a reason it's called "evil": ZFS is not supposed to be tuned. The default values are almost always the right values, and most of the time, changing them won't help, unless you really know what you're doing. So, handle with care.

Still, when people encounter a ZFS performance problem, they tend to Google "ZFS tuning", then they'll find the Evil Tuning Guide, then think that performance is just a matter of setting that magic variable in /etc/system.

This is simply not true.

Measuring performance in a standardized way, setting goals, then sticking to them helps. Adding RAM helps. Using SSDs helps. Thinking about the right number and RAID level of disks helps. Letting ZFS breathe helps.

But tuning kernel parameters is reserved for very special cases, and then you're probably much better off hiring an expert to help you do that correctly.
Bonus: Some Miscellaneous Settings

If you look through the zfs(1M) man page, you'll notice a few performance-related properties you can set.
They're not general cures for all performance problems (otherwise they'd be set by default), but they can help in specific situations. Here are a few, with a combined example of setting them after the list:

* atime: This property controls whether ZFS records the time of last access for reads. Switching this to off will save you extra write IOs when reading data. This can have a big impact if your application doesn't care about the time of last access for a file and if you have a lot of small files that need to be read frequently.
* checksum and compression can be double-edged swords: The stronger the checksum, the better your data is protected against corruption (and this is even more important when using dedup). But a stronger checksum method will incur some more load on the CPU for both reading and writing.
Similarly, using compression may save a lot of IOPS if the data can be compressed well, but may be in the way for data that isn't easily compressed. Again, compression costs some extra CPU time.
Keep an eye on CPU load while running tests and if you find that your CPU is under heavy load, you might need to tweak one of these.
* recordsize: Don't change this property unless you're running a database on this filesystem. ZFS automatically figures out the best blocksize for your filesystems.
In case you're running a database (where the file may be big, but the access pattern is always in fixed-size chunks), setting this property to your database record size may help performance a lot.
* primarycache and secondarycache: We already introduced the primarycache property in tip #2 above. It controls whether your precious RAM cache should be used for metadata or for both metadata and user data. In cases where you have an SSD configured as a cache device and if you're using a large filesystem, it may help to set primarycache=metadata so the RAM is used for metadata only.
secondarycache does the same for cache devices, but it should be used to cache metadata only in cases where you have really big file systems and almost no real benefit from caching data.
* logbias: When executing synchronous writes, there's a tradeoff to be made: Do you want to wait a little, so you can accumulate more synchronous write requests to be written into the log at once, or do you want to service each individual synchronous write as fast as possible, at the expense of throughput?
This property lets you decide which side of the tradeoff you want to favor.
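Here is the combined sketch referenced above; the filesystem names and values are only examples of the syntax, not recommendations for your workload:

# zfs set atime=off tank/files
# zfs set compression=on tank/files
# zfs set recordsize=8K tank/db
# zfs set logbias=throughput tank/db
# zfs set primarycache=metadata tank/bigdata
# zfs set secondarycache=metadata tank/bigdata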

Your Turn

Sorry for the long article. I hope the table of contents at the beginning makes it more digestible, and I hope it's useful to you as a little checklist for ZFS performance planning and for dealing with ZFS performance problems.

Let me know if you want me to split up longer articles like these (though this one is really meant to remain together).

Now it's your turn: What is your experience with ZFS performance? What options from the above list did you implement for what kind of application/problem and what were your results? What helped and what didn't and what are your own ZFS performance secrets?

Share your ZFS performance expertise in the comments section and help others get the best performance out of ZFS!
Related Posts

* Seven Useful OpenSolaris ZFS Home Server Tips
* OpenSolaris ZFS Deduplication: Everything You Need to Know
* Home Server: RAID-GREED and Why Mirroring is Still Best

Wednesday, October 20, 2010

[Solaris] Installing Cool Stack Apache, MySQL, PHP

Coolstack is Sun's preferred suite of precompiled Apache 2.2.3, MySQL 5, and PHP 5, all ready to go in a bundle. Each package is compiled and optimized for performance on each architecture, and is fully tested by Sun.
Preflight
Prerequisites

For Coolstack 1.2, you must first install the Coolstack runtime package:

CSKruntime_1.2_x86.pkg

To install on Joyent Accelerators, all you have to do is execute a few simple steps. Download Coolstack from http://cooltools.sunsource.net/coolstack/.

Once you get the package in your Accelerator, decompress it and pkgadd it:

# bzip2 -d CSKamp_x86.pkg.bz2
# pkgadd -d ./CSKamp_x86.pkg

By default it installs to /opt/coolstack. If you go inside this directory you will see directories for each individual package:

# ls /opt/coolstack
apache2/ etc/ info/ man/ php5/ share/
bin/ include/ lib/ mysql_32bit/ sbin/

Each application's directory contains a README file that describes the setup steps and the Sun Studio compile options that were used.
Apache

The Apache used is 2.2.3, compiled with the prefork MPM. Apache is ready to go; all you have to do is start it.

# /opt/coolstack/apache2/bin/apachectl start

MySQL

The MySQL used is version 5.0.33, 32-bit. If you need to work with larger databases (using more than 4 GB of RAM), consider the 64-bit MySQL package that the CoolStack webpage provides.

To get MySQL started, you first need to copy over a my.cnf file to use. my-medium.cnf should work for most Accelerators; check my-small.cnf if you are on a smaller container and want to conserve memory.

# cp /opt/coolstack/mysql_32bit/share/mysql/my-medium.cnf /opt/coolstack/mysql_32bit/my.cnf

Also copy over the mysql.server start and stop script:

# cp /opt/coolstack/mysql_32bit/share/mysql/mysql.server /opt/coolstack/mysql_32bit

Create the system tables and give permissions to the directories:

# /opt/coolstack/mysql_32bit/bin/mysql_install_db
# chown -R mysql:mysql /opt/coolstack/mysql_32bit

And start the server:

# /opt/coolstack/mysql_32bit/bin/mysql.server start
Starting MySQL
SUCCESS!
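At this point the MySQL root account has no password, so a sensible next step is to set one and test the login (paths follow the CoolStack layout above; the password is a placeholder):

# /opt/coolstack/mysql_32bit/bin/mysqladmin -u root password 'new-password'
# /opt/coolstack/mysql_32bit/bin/mysql -u root -p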

Thursday, October 14, 2010

Configuring Sendmail to Relay Messages from Other Servers

Part 1. Configuring Sendmail on Solaris 10
Part 2. Configuring Sendmail to Masquerade Your Messages
Part 3. Configuring Sendmail to Relay Messages to Another Server
Part 4. Configuring Sendmail to Relay Messages from Other Servers

Introduction
In the previous post you learned how to configure Sendmail to relay messages to another server. Now, such a server should probably be configured to accept incoming messages for relay from other servers. The default Solaris 10 Sendmail configuration does not allow message relaying, so the proper configuration must be applied to Sendmail.

Configuring Relay for Hosts and Domains
The quickest way to have Sendmail relay messages for other domains is by modifying the /etc/mail/relay-domains file. Sendmail will relay mail for every domain listed in that file. If you want your server to relay messages for the domains a.com, b.com, and c.com, just insert the corresponding lines into /etc/mail/relay-domains and restart your Sendmail instance:

# cat /etc/mail/relay-domains
a.com
b.com
c.com

Configuring the Access Database
If you want to relay messages from specific hosts (as well as domains and networks), you can use the access database. The access database lists email addresses, network numbers, and domain names, each paired with a rule. The available rules are:

* OK: Accept mail even if other rules in the running ruleset would reject it.
* RELAY: Accept mail addressed to the indicated domain, or received from the indicated domain, for relaying.
* REJECT: Reject the sender or recipient with a general purpose message.
* DISCARD: Discard the message completely using the $#discard mailer.
* (An RFC 821-compliant error text): Return the error message.


If you want your Sendmail to relay mail for a domain or from some specific hosts, modify your /etc/mail/access accordingly:
your-domain RELAY
192.168.0 RELAY
another-domain RELAY
unwanted-host REJECT

Once done, you have to generate the access db with the following command:

# makemap hash /etc/mail/access.db < /etc/mail/access

Enabling the Access Database
To have your Sendmail use the access database, you must configure it properly by adding the access_db feature to its configuration (.mc) file:

# cat your-file.mc
[...snip...]
FEATURE(`access_db')
[...snip...]

Restart your Sendmail and enjoy!
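Remember that the .mc file has to be run through m4 to regenerate sendmail.cf before the restart takes effect; on Solaris 10 that looks roughly like this (the .mc name comes from the snippet above, and the cf paths may differ on your installation):

# cd /etc/mail/cf/cf
# m4 ../m4/cf.m4 your-file.mc > /etc/mail/sendmail.cf
# svcadm restart svc:/network/smtp:sendmail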

A Word of Warning: DNS Configuration
Sendmail often requires that the host names you use in your configuration files (such as the access database) are properly configured in your name server, for both forward and reverse lookup. I hope this spares you some headaches while debugging.

Thursday, September 16, 2010

VSFTP command

ABOR,ACCT,ALLO,APPE,CDUP,CWD,DELE,EPRT,EPSV,FEAT,HELP,LIST,MDTM,MKD,MODE,NLST,NOOP,OPTS,PASS,PASV,PORT,PWD,QUIT,REIN,REST,RETR,RMD,RNFR,RNTO,SITE,SIZE,SMNT,STAT,STOR,STOU,STRU,SYST,TYPE,USER,XCUP,XCWD,XMKD,XPWD,XRMD

Reading FTP Logs in xferlog Format

For some reason I can never remember the xferlog format that is used by daemons such as Pure-FTP. Although xferlog is well documented, I can never seem to find the doc when I need it, and it's never bad to have information duplicated in many places!

Anyway, on with the description. Here is a sample log entry from my server (with access IPs and dirs changed):

Fri May 14 05:16:12 2010 0 ::ffff:1.2.3.4 11974 /home/user/public_html/index.php a _ i r user ftp 0 * c


I'll step through each item individually. The delimiter here is whitespace, so each new token represents a unique piece of data, with the exception of the date at the beginning.

Fri May 14 05:16:12 2010

Date/time stamp, nothing complicated.

0

Transfer time, in whole seconds (this transfer took less than a second, so zero).

::ffff:1.2.3.4

Remote host where the user connected from.

11974

Size of the transferred file (in bytes).

/home/user/public_html/index.php

Full path to the uploaded file.

a

Transfer type, a = ASCII (plain-text files), b = binary (everything else)

_

Action flag, C = compressed, U = uncompressed; T = tar'ed; _ = no action was taken.

i

Direction, i = incoming, o = outgoing, d = deleted.

r

Access mode, a = anonymous user, r = real (normal) user.

user

Local username authenticated with.

ftp

The service being invoked (almost always FTP).

0

Authentication method, 0 = none, 1 = RFC 931 authentication.

*

User ID or * if not available (virtual user).

c

Completion status, c = completed, i = incomplete.

That's all there is to it!
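Since the fields are whitespace-delimited (the timestamp takes the first five tokens, so the direction is field 12, the size field 8, and the username field 14), a quick awk one-liner can summarize the log. For example, total bytes uploaded per user - assuming your log lives at /var/log/xferlog, and keeping in mind that file names containing spaces will shift the fields:

awk '$12 == "i" { up[$14] += $8 } END { for (u in up) print u, up[u] }' /var/log/xferlog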

Tuesday, March 30, 2010

Step by step install innotop

Innotop is a very useful tool to monitor InnoDB information in real time. This tool is written by Baron Schwartz, who is also an author of the book “High Performance MySQL, Second Edition”. [Side note: I highly recommend getting this book when it comes out (in June, 08?). Other authors include: Peter Zaitsev, Jeremy Zawodny, Arjen Lentz, Vadim Tkachenko and Derek J. Balling.] Quick summary of what innotop can monitor (from: http://innotop.sourceforge.net/): InnoDB transactions and internals, queries and processes, deadlocks, foreign key errors, replication status, system variables and status, and much more.

Following are the instructions on how to install innotop on CentOS x64/Fedora/RHEL (Red Hat Enterprise). Most probably the same instructions can be used on all flavors of Linux. If not, leave me a comment and I will research a solution for you. Let us start by downloading innotop. I used version 1.6.0, which is the latest at the time of writing.

wget http://internap.dl.sourceforge.net/sourceforge/innotop/innotop-1.6.0.tar.gz

Now let us go ahead, extract it, and create the Makefile to get it ready for install:

tar zxpf innotop-1.6.0.tar.gz
cd innotop-1.6.0
perl Makefile.PL

At this point if you get the following output, you are good to continue:

Checking if your kit is complete...
Looks good
Writing Makefile for innotop

If you get something similar to following, you will need to take care of the prerequisites:

Looks good
Warning: prerequisite DBD::mysql 1 not found.
Warning: prerequisite DBI 1.13 not found.
Warning: prerequisite Term::ReadKey 2.1 not found.
Writing Makefile for innotop

Just because they are warnings does not mean you should ignore them. So let us install those prerequisites. We will use perl's CPAN shell to get this installed (visit my post on how to install perl modules for more details). If it is your first time starting this up, you will have to answer some questions. The defaults will work fine in all cases.

perl -MCPAN -eshell
install Term::ReadKey
install DBI
install DBD::mysql

Note: you must install DBI before you can install DBD::mysql.

If you get an error like following when you are installing DBD::mysql:

Error: Can't load '/root/.cpan/build/DBD-mysql-4.007/blib/arch/auto/DBD/mysql/mysql.so' for module DBD::mysql: libmysqlclient.so.15: cannot open shared object file: No such file or directory at /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DynaLoader.pm line 230.

You will have to create a symlink to the object file in your lib64 (or lib if you are not using x64 version) folder:

ln -s /usr/local/mysql/lib/mysql/libmysqlclient.so.15 /lib64/

Once all prerequisites are done, type perl Makefile.PL and you should have no warnings. Continue the install:

make install

At this point you should have innotop installed on your system. Let us do some quick setup so you can start using innotop. We start by configuring your .my.cnf to include connection directives.

vi ~/.my.cnf

Add the following (edit to reflect your install) and save/exit

[mysql]
port = 3306
socket = /tmp/mysql.sock

Start up innotop by typing innotop at your shell prompt. The first prompt will ask you to “Enter a name:”. I just put localhost since this will be used to connect locally. The next prompt asks you for a DSN entry. I use: DBI:mysql:;mysql_read_default_group=mysql

This tells innotop to read the .my.cnf file and use the [mysql] group directives. The next prompt is optional (I just press enter). For the next two prompts, enter information if you need to.

At this point your innotop installation / testing is complete. You can read man innotop to get more details on how to use innotop.
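If you would rather not rely on .my.cnf, innotop also accepts connection options on the command line (the credentials here are placeholders; see the man page for the full list):

innotop -u root -p 'your_password' -h localhost

Once it is running, single keystrokes switch between monitoring modes, for example Q for the query list or T for InnoDB transactions.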