Since this is my first post here, please forgive me if I make any mistakes.

About 10 years ago, for some ancient hardware (the Atari 2600), I came up with (and implemented) an idea for decoding specially tailored Huffman-encoded bit streams very efficiently, without having to use a Huffman tree. For a current project I came back to the idea and searched the web for documentation about it. I found similar approaches, but they all used fairly large tables. Because I didn't find anything that really matched, I now wonder whether I just reinvented the wheel or whether my idea might actually be unique.

The resulting (pseudo) code for decoding a value is pretty simple:

// the table values are an example for the codes 0, 10, 110 and 111
#define MIN_BITS 1
#define MAX_BITS 3
int nextTable[MAX_BITS] = {0, 2, 8};
int offsetTable[MAX_BITS - MIN_BITS + 1] = {0, 1, 4};

int decode() {
    int code = 0, bit = 0;
    do {
        code <<= 1;
        code += getNextBit(); // returns 0 or 1
    } while (code > nextTable[bit++]);
    // code now contains the Huffman code, bit its length

    // convert the code into a value table index
    return code - offsetTable[bit - MIN_BITS];
}

The returned index is used to read the actual value from a table.

The new idea is in the two small tables, nextTable and offsetTable. nextTable defines the points where the Huffman codes switch to the next bit length, and offsetTable converts the resulting Huffman code into a table index.

The only precondition for this code to work is that the Huffman codes are ordered: shorter codes must have a numerically lower value than longer codes.

Since I suppose this algorithm is already well known, I would be very thankful for a name or some links to it. Thanks in advance!