How fast can you go?

infinipool Tachometer is a read-only tool that analyses your local storage or AWS S3 buckets to measure how efficiently you are using them. Support for other data sources is planned.

This small tool generates a report that can be uploaded here, producing an easy-to-read graphical summary of where you can save space and become more efficient.

Platform  Download                    md5
Linux     tachometer-0.2.0-linux.bz2  e66afb7bf04372ffee23d57a38982de9
Mac       tachometer-0.2.0-mac.bz2    b05f0e23b1ef35ca6287fbed814430aa
Windows   tachometer-0.2.0.zip        960f5b78c73e3fe06bc7052ac2d5ebca

Tachometer is currently in beta. We appreciate your feedback.

Configuration

To access S3, Autobahn and Tachometer need your AWS credentials. The programs look for credentials in the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. If these are not set, they try to read the file $HOME/.aws-keys, whose lines have the form keyName awsKeyID awsKeySecret.
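The lookup order described above can be sketched in Python. This is an illustrative reimplementation, not Tachometer's actual code; the function name, the tolerance for extra lines, and the error handling are assumptions.

```python
import os

def load_aws_credentials(keys_path=None):
    """Sketch of the credential lookup: environment variables first,
    then the $HOME/.aws-keys file (lines: keyName awsKeyID awsKeySecret)."""
    key_id = os.environ.get("AWS_ACCESS_KEY_ID")
    secret = os.environ.get("AWS_SECRET_ACCESS_KEY")
    if key_id and secret:
        return key_id, secret
    # Fall back to the key file in the user's home directory.
    path = keys_path or os.path.join(os.environ.get("HOME", ""), ".aws-keys")
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3:
                _key_name, key_id, secret = parts
                return key_id, secret
    raise RuntimeError("no AWS credentials found")
```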

Basic Usage

Tachometer scans your data to estimate how well Autobahn will be able to reduce its volume. The command line options -s and -b allow you to specify local directories and AWS S3 buckets as data sources, respectively.

Note that Autobahn detects and uses similarities in all the data you give it, so that the efficiency increases the more you use it. If you are planning to use Autobahn on data from multiple directories, be sure to include them all in the same call to Tachometer, as in

tachometer -s some-directory -s other-directory -b some-bucket
instead of invoking Tachometer on each directory separately.

The accuracy of Tachometer's prediction can be tuned with the -n option. Higher values increase the accuracy, but also increase the running time, memory footprint, and number of S3 requests.
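For example, a higher-accuracy scan over the same sources might look like the following (the value 100000 is only an illustration; the text above does not specify the default or the valid range for -n):

```shell
# Scan two directories and one bucket with a larger -n for a more accurate,
# but slower and more S3-request-heavy, prediction.
tachometer -n 100000 -s some-directory -s other-directory -b some-bucket
```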

Please be aware that Tachometer must read all the data it analyses. If you are processing large volumes of data from S3, you may want to run the program on an EC2 instance to avoid data transfer costs.