Saturday, December 19, 2009

BitMagic library version 3.6.2 released

New features:
- added configurations for MacOS-X
- optimizations for SSE 4.2 in 64-bit mode

It is now possible to combine:
#define BM64OPT
#define BMSSE42OPT

These two options produce SIMD-vectorized code for SSE 4.2 that uses 64-bit aware bit counting.

- new algorithm for picking a random subset out of bvector<>; see sample10.cpp

Wednesday, December 16, 2009

SSE2 vs CUDA in parallel bit stream transposition / similarity. Part 2.

We made a second attempt to work with CUDA to better understand whether it is a suitable technology for database and data-mining acceleration.
With very valuable help from our readers we prepared a second version of the GPU benchmark and tried to analyze performance and power consumption.

CUDA vs SSE. Part 2.

Friday, December 11, 2009

Win7 vs Ubuntu 9.10 (32-bit vs 64-bit)

This benchmark is designed to illustrate the impact of platform and compiler choice on BitMagic-derived computational algorithms.

Test total numbers (shorter - faster)


Test Platform: Nehalem Core i5, 2.67GHz (Turbo Boost enabled)
BitMagic Library 3.6.2 (not yet released)

Windows 7 32-bit: default 32-bit compile using MSVC8, SSE4 optimization disabled.

Ubuntu 9.10 64-bit: GCC 4.4.1 with compile options (-march=core2 -m64 -msse4)

BitMagic 64-bit optimizations + SSE4 optimizations:
#define BMSSE42OPT
#define BM64OPT

(this combination will be supported in BitMagic Library 3.6.2).
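As a sketch of how the pieces above fit together (source file name and include path are placeholders, not from the original post), the Ubuntu build would combine the compiler flags with the two defines like this:

```shell
# Hypothetical compile line for the 64-bit SSE4.2 configuration
# (perf.cpp and the include path are illustrative placeholders)
g++ -O2 -march=core2 -m64 -msse4 -DBM64OPT -DBMSSE42OPT \
    -I/path/to/bm/src perf.cpp -o perf
```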

All measurements in seconds (shorter - faster).


As you can see, the default 32-bit compile with the stock compiler loses in almost every test to the 64-bit SSE4.2 version tuned for the platform.

Yes, this experiment was intentionally staged for Windows7+MSVC to lose (by using unfavorable compilation and optimization settings): stock x86 32-bit software suffers from register starvation and often ignores the presence of the SIMD SSE unit. That said, such resource under-utilization often happens in real life and is often dismissed as insignificant. From this data we can see that sometimes insignificant things can add up to measurable values.

Sunday, November 22, 2009

BitMagic library version 3.6.1 released

  • Elias Gamma decoding optimizations significantly improved bit-vector de-serialization performance. Compressed bit-vectors now save disk space and IO bandwidth without big CPU costs
  • Fixed stack overrun introduced in 3.6.0
  • First version of SSE 4.2 optimizations for Nehalem (Core i5, i7, etc). Tested with MSVC 2008 and GCC 4.4.1. Use BMSSE42OPT to turn on SSE4.2 support in the code.

Friday, November 13, 2009

64-bit portability checklist

A useful article from the developers of the PVS-Studio tool on porting to 64-bit systems.

20 issues of porting C++ code on the 64-bit platform

SSE4.2 readiness. GCC vs MSVC. Deathmatch.

GCC 4.3.2 (cygwin) vs MS VC++ 2008 (Windows 7).

First results of porting the BitMagic Library to the Intel Nehalem SSE4.2 instruction set. :-)

1>Generating code
1>c:\dev\bm\src\bmsse4.h(140) : fatal error C1001: An internal error has occurred in the compiler.
1>(compiler file 'f:\dd\vctools\compiler\utc\src\p2\main.c[0x60375DBB:0x00000000]', line 182)
1> To work around this problem, try simplifying or changing the program near the locations listed above.
1>Please choose the Technical Support command on the Visual C++
1> Help menu, or open the Technical Support help file for more information
1>LINK : fatal error LNK1000: Internal error during IMAGE::BuildImage
1> Version 9.00.30729.01
1> ExceptionCode = C0000005
1> ExceptionFlags = 00000000
1> ExceptionAddress = 60375DBB (60300000) "C:\Program Files\Microsoft Visual Studio 9.0\VC\bin\c2.dll"
1> NumberParameters = 00000002
1> ExceptionInformation[ 0] = 00000000
1> ExceptionInformation[ 1] = 00000000
1> Eax = 031F666C Esp = 0034EC78
1> Ebx = 00000000 Ebp = 0034ECA4
1> Ecx = 00000000 Esi = 03214298
1> Edx = 00000000 Edi = 03201608
1> Eip = 60375DBB EFlags = 00010246
1> SegCs = 0000001B SegDs = 00000023
1> SegSs = 00000023 SegEs = 00000023
1> SegFs = 0000003B SegGs = 00000000
1> Dr0 = 00000000 Dr3 = 00000000
1> Dr1 = 00000000 Dr6 = 00000000
1> Dr2 = 00000000 Dr7 = 00000000
1>Build log was saved at "file://c:\dev\bm\bmperf\Release\BuildLog.htm"
1>bmperf - 2 error(s), 5 warning(s)

$ make
======= Building : BitMagic Library Performance Test cygwin GNU_CC
< perf.cpp >
g++ -msse4 -march=core2 -Wall -c -D_REENTRANT -DCYGWIN_NT -D_GNU_SOURCE -Wno-d
eprecated -I/cygdrive/c/dev/bm/src perf.cpp -o /cygdrive/c/dev/bm/tests/perf
/Release/perf.o -g0 -O2 -march=core2 -fomit-frame-pointer -pipe
perf.cpp: In function 'void EnumeratorTest()':
perf.cpp:497: internal compiler error: in memory_address_length, at config/i386/
Please submit a full bug report,
with preprocessed source if appropriate.
See for instructions.
make: *** [/cygdrive/c/dev/bm/tests/perf/Release/perf.o] Error 1

Monday, October 19, 2009

Notes on Elias Gamma encoding in BitMagic 3.6.0

Version 3.6.0 implements a variant of Elias Gamma Encoding as part of bit-vector serialization.
Wikipedia has an article on this encoding here:

The description is good enough that one should be able to implement this encoding after reading a couple of pages.

Older versions of the BM library used D-Gap compression, a variant of run-length encoding, or plain lists of integers to encode very sparse sets. All these compression methods are word-aligned and rely on the presence of long runs of identical bits. Word-aligned compression is also very fast.

A lot of practical tests demonstrated that BM-based search systems are bandwidth limited. This means the CPU is starved, waiting for memory operations or disk retrieval. Disk retrieval is particularly important because IO and network subsystems never followed Moore's Law. Non-word-aligned encoding seems to be a good answer to this problem.

Elias Gamma encoding, developed by Peter Elias, defines a bit-level encoding of positive (non-zero) integers. Positive non-zero integers are essentially the native format for the BM library to store sparse bit-vectors or inverted lists (basic building blocks of efficient bitmap indexes).

Advantages:
1. Simple implementation, requires no compression dictionary. It is easy to compress just a few ints without constructing and storing a dictionary. The BM library works with independent blocks of bits (potentially computed in parallel).
2. Implied probability favors small integers, which is good for the BM library since it fragments the bit-vector into blocks of 65K bits. The method allows some blocks to be encoded while others remain in GAP or raw bit-block format (if the statistical distribution of bits suddenly changes).
3. Compact implementation reduces code bloat, fits into a C++ template library paradigm.

Drawbacks:
1. True adaptive statistical methods (like arithmetic compression) can (and should?) give a better compression ratio.
2. Encoding and decoding are sequential; it seems hard to implement a parallel encoder/decoder using SIMD, SSE or GP-GPU. (BM blocks can potentially be decoded in parallel, so maybe there is a way to use parallel methods at that level.)

Saturday, October 17, 2009

BitMagic Library v.3.6.0 released

  • improved bit-vector serialization with bitset compression.
    New serialization adds Gamma encoding on top of D-Gap compression. This should
    reduce IO for databases and search systems using serialized bit-vectors.
  • fixed functionality bug in deserialization with logical AND
  • new serialization API (old API kept for backward compatibility)

    Saturday, August 22, 2009

    BitMagic version 3.5.4 released

    The scope of this release is general code cleanup (only partially successful) and SSE2 for GCC compatibility fixes.

    GCC supports SSE2 intrinsics, so this port was relatively easy. Practically the only change needed was correct syntax for 16-byte alignment declaration. SSE2 optimizations are now available for Intel C++, GCC and MSVC.


    Monday, August 17, 2009

    CUDA vs SSE

    We all know that GPUs can be incredibly fast at some single precision floating point algorithms.
    Developers of the BitMagic Library, Igor Tolstoy and Anatoliy Kuznetsov, made an attempt to implement parallel bit-stream transposition and a similarity algorithm (a lot of bit shifting, population counting and general-purpose integer logic) to better understand CUDA and GP-GPU computing in comparison with current (and future) Intel vectorization architectures (SSE2).

    Benchmarking, CUDA and SSE source code, and some speculation about Larrabee and implications for data-mining and large-scale databases here:

    Wednesday, August 12, 2009

    Sunday, August 9, 2009

    Intel Atom: Best GCC settings

    After several experiments with 32-bit GCC 4.3.3 on an Intel Atom N270, here are some recommended optimization parameters:
    -march=core2 -O2 -fomit-frame-pointer -pipe
    Another good alternative:
    -mtune=native -O2 -fomit-frame-pointer -pipe

    In some BM benchmarks -march=core2 is up to 20% better than -march=prescott.
    So it looks like the Intel Atom is more Core2 than Pentium4. For low-power devices like the Atom it is very important to use the right settings, since the platform balances on a very fine edge in terms of usability. Given the growing popularity of this CPU, it would be nice to see binary builds of software products optimized for the target (or doing internal CPUID auto-detection to load the right binaries).

    SSE2 integer optimization seems to get a nice additional boost too. The current version of the BitMagic Library (3.5.3) is not compatible with GCC's syntax for variable alignment (one of the key elements for getting the best memory bandwidth with SSE). The next version is going to offer better compatibility.

    Saturday, August 8, 2009

    BitMagic version 3.5.3 released

    • improved performance of AND logical operations and derived DOT product algorithms (count_and(), any_and())
    • improved compression of serialized bvectors (serialization format remains backward compatible)
    • fixed compatibility issues with MSVC8 and Intel C++ 11.1 (SSE2 optimization mode)
    • improved automatic variables disambiguation using 'restrict'


    Friday, August 7, 2009

    Search for Efficient Bitwise Traversal

    One of the fundamental problems the BitMagic library faces is optimization of bit traversal algorithms.
    BitMagic implements a class bvector<>::enumerator to iterate over 1 bits as if they were a sequence of integers. Under the hood of this high-level algorithm lies the problem of unpacking a single integer value into a variable-length array of 1-bit indexes.

    For example, the bit string 10011 should translate into { 0, 3, 4 }.

    This problem is somewhat related to the famous POPCNT family of bit-counting algorithms. POPCNT has been beaten to death on the Internet, while there are not too many sources covering bit traversal (which looks very relevant too).

    It is hard to implement a fast and generic algorithm to behave equally well for words with high and low bit density. In this post we want to encourage others to try finding alternative methods, optimizing for 64-bit, SSE2, maybe GPU, CUDA, etc.

    Solution 1.
    This algorithm uses an unrolled loop processing one octet (8 bits) per iteration with straightforward checks. This code is very compact and works well when both code and data fit into the L1 CPU cache. Template parameter T represents the integer word type, F is a functor class invoked for each 1 bit found.

    template < typename T, typename F >
    void bit_for_each(T w, F& func)
    {
        for (unsigned octet = 0; w != 0; w >>= 8, octet += 8)
        {
            if (w & 1)   func(octet + 0);
            if (w & 2)   func(octet + 1);
            if (w & 4)   func(octet + 2);
            if (w & 8)   func(octet + 3);
            if (w & 16)  func(octet + 4);
            if (w & 32)  func(octet + 5);
            if (w & 64)  func(octet + 6);
            if (w & 128) func(octet + 7);
        }
    }

    Solution 2.
    The second algorithm implements a hard-coded switch table (the C/C++ compiler most likely transforms this into a jump address table). As a general-purpose solution this algorithm looks faster than variant 1: it gives the compiler a better compile-time optimization plan.
    The catch is that the functor must support multiple arguments (up to 4 in this case). All functor operators () must be inline, otherwise all performance benefits are easily lost pushing parameters down the stack. (This solution is inspired by template meta-programming articles; I believe this code can be written as a compact template meta-program at the expense of clarity and maybe compatibility...)

    template < typename T, typename F >
    void bit_for_each_4(T w, F& func)
    {
        for (unsigned sub_octet = 0; w != 0; w >>= 4, sub_octet += 4)
        {
            switch (w & 15)
            {
            case 0: // 0000
                break;
            case 1: // 0001
                func(sub_octet);
                break;
            case 2: // 0010
                func(sub_octet + 1);
                break;
            case 3: // 0011
                func(sub_octet, sub_octet + 1);
                break;
            case 4: // 0100
                func(sub_octet + 2);
                break;
            case 5: // 0101
                func(sub_octet, sub_octet + 2);
                break;
            case 6: // 0110
                func(sub_octet + 1, sub_octet + 2);
                break;
            case 7: // 0111
                func(sub_octet, sub_octet + 1, sub_octet + 2);
                break;
            case 8: // 1000
                func(sub_octet + 3);
                break;
            case 9: // 1001
                func(sub_octet, sub_octet + 3);
                break;
            case 10: // 1010
                func(sub_octet + 1, sub_octet + 3);
                break;
            case 11: // 1011
                func(sub_octet, sub_octet + 1, sub_octet + 3);
                break;
            case 12: // 1100
                func(sub_octet + 2, sub_octet + 3);
                break;
            case 13: // 1101
                func(sub_octet, sub_octet + 2, sub_octet + 3);
                break;
            case 14: // 1110
                func(sub_octet + 1, sub_octet + 2, sub_octet + 3);
                break;
            case 15: // 1111
                func(sub_octet, sub_octet + 1, sub_octet + 2, sub_octet + 3);
                break;
            }
        }
    }

    Again, there should be a way to write assembly code using BSF (bit scan forward) or BSR (bit scan reverse). References to relevant samples are highly appreciated.

    Wednesday, August 5, 2009

    Compilers test ICC vs MSVC vs GCC

    BitMagic benchmark test compiled on Windows XP under different compilers (GCC 3.4, MSVC 8 and Intel C++ 11). The tests here are synthetic, but represent very practical functions used in data-mining and database search algorithms. No SSE2 or 64-bit optimization, pure 32-bit mode, single thread. The system runs a Core2 Duo.

    BitMagic individual tests (shorter - faster)

    Overall results (shorter - faster)

    Test results are relatively uniform (the code is rather well tuned manually); it was a bit of a surprise to see Intel C++ 11 slower in the XOR count test (Hamming distance).
    Overall winner for integer algorithms: GNU C++.

    Friday, July 31, 2009

    Intel Atom vs Intel Core 2 Duo

    The goal of this test is to determine how a low-end single-core processor measures up against a middle-range desktop CPU in a test dominated by logical operations and bit-set arithmetic using the BitMagic Library. We are trying to check the hypothesis that it makes sense to build a small compute cluster for distributed data-mining and search system applications based on Atom processors.

    The test systems:

    1. Intel® Atom N270 1.6 GHz - Dell Inspiron Mini 10 netbook

    2. Intel® Core™ 2 Duo Processor E8500 (3.16GHz, 6M, 1333MHz FSB) – Dell OptiPlex 960

    The conducted test runs a mix of BitMagic library performance tests covering various bit-vector operations. All performance results are in seconds; shorter is faster (better).

    In our test the Core2 seems to be 3.4 times faster than the Atom, measured core to core in a single-threaded program. The Intel Atom N270 is a single-core hyper-threaded chip (it pretends to be dual-core). Running two performance tests in parallel offers no benefit for a well-optimized program (each test just runs 2x slower). Given this fact, the Atom seems to be 6.8x slower than the Core 2.

    With these numbers we can compare the energy efficiency of both CPUs. The spec for the Atom N270 says it runs in a thermal envelope of 2.5W; the Core2 Duo claims 65W (a 26x difference in the Atom's favor).

    Based on some internet reviews, the full-system power consumption for the Atom N270 netbook should be around 42W, and the Core 2 at full throttle is probably running at around 300W (all numbers need to be confirmed with a socket power meter). System power consumption offers only a 7.14x advantage to the Atom. It covers the 6.8x performance gap between the Core 2 and the Atom, but the margin is not high. This comes from the fact that the power efficiency of the current generation of Atom systems is dominated by the chipset and other components. For a small compute cluster, power can be reduced by not running a GUI and replacing the HDD with a small bootable flash drive. This applies to both systems.


    At this moment the Atom does not unfold its full potential as a cluster building block, but the next-generation Pineview is going to have an integrated memory controller and graphics core (SoC design), which means system power consumption should go down, making Atom clusters a more sensible choice.

    Saturday, July 25, 2009

    First post to BitMagic C++ Library BLOG.