Can a lookaside list be used to preallocate a contiguous buffer of a certain length?

I am writing a driver routine that adds an entry (up to a maximum capacity) to a list at run time, i.e., analogous to a dynamically growing array.
Once an entry is added, we store the base address of the dynamic list in a pointer, which is then used for processing later on in other functions.
I am looking for a driver construct that allows me to add entries dynamically but at the same time keeps them contiguous.
I came across the lookaside list in Windows, but I am not sure where to specify the depth, and the documentation is not clear about whether the entries added to a lookaside list are contiguous or not.
I also explored using RtlAllocateHeap to allocate a certain number of entries at initialization, but what if a new entry to be added exceeds the maximum number of entries specified at initialization?
If this is something commonly done in the Windows driver world, is there a data structure or routine readily available that I can use for this purpose?

Why do they need to be contiguous? The normal practice is to use a linked list.

A kernel lookaside list is a singly linked list of subblocks. The entire list of blocks is not contiguous.

Performance?
Pointer dereferencing is extremely slow.

There is nothing of the sort that I know of; we had to roll our own. But we have
no solution for when more than the preallocated memory is needed. Also, if
items are removed the storage is no longer contiguous, but it is still
faster than linked lists, because offsets are used.

I am looking for a driver construct that allows me to add entries dynamically but at the same time keeps them contiguous.

Virtually contiguous, right?

Ahhhh… I guess it depends on the size, and how often you add/remove entries? Don’t you just want to pre-allocate the “array” to its maximum size, and then deal with incrementing the “last entry used” pointer?

What, exactly, are you trying to optimize… and why?
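If pre-allocating to the maximum capacity really is acceptable, the whole thing can just be one pool allocation plus a count. A minimal sketch of that idea, assuming a made-up MY_ENTRY record and capacity (none of these names come from your code):

```c
#include <ntddk.h>

#define MY_POOL_TAG    'yrtE'
#define MY_MAX_ENTRIES 1024              /* hypothetical maximum capacity */

typedef struct _MY_ENTRY {               /* hypothetical record layout */
    ULONG Key;
    ULONG Value;
} MY_ENTRY, *PMY_ENTRY;

typedef struct _MY_TABLE {
    PMY_ENTRY Entries;                   /* one (virtually) contiguous allocation */
    ULONG     Count;                     /* the "last entry used" marker */
    ULONG     Capacity;
} MY_TABLE, *PMY_TABLE;

NTSTATUS MyTableInit(_Out_ PMY_TABLE Table)
{
    /* ExAllocatePool2(POOL_FLAG_NON_PAGED, ...) on newer WDKs */
    Table->Entries = (PMY_ENTRY)ExAllocatePoolWithTag(
        NonPagedPoolNx, MY_MAX_ENTRIES * sizeof(MY_ENTRY), MY_POOL_TAG);
    if (Table->Entries == NULL) {
        return STATUS_INSUFFICIENT_RESOURCES;
    }
    Table->Count    = 0;
    Table->Capacity = MY_MAX_ENTRIES;
    return STATUS_SUCCESS;
}

NTSTATUS MyTableAdd(_Inout_ PMY_TABLE Table, _In_ const MY_ENTRY *Entry)
{
    if (Table->Count >= Table->Capacity) {
        return STATUS_INSUFFICIENT_RESOURCES;   /* fixed maximum, by design */
    }
    Table->Entries[Table->Count++] = *Entry;    /* indexable, just like the old static array */
    return STATUS_SUCCESS;
}
```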

So, the original legacy code uses a static array and iterates through it to perform processing. It can access any element of the array given an index (since the array is contiguous in memory).
I was evaluating my options as to whether there is any construct in Windows that will allow me to add more elements to a contiguous array (but it looks like there is no such mechanism).
In case I go with the option to use a lookaside list, how do we iterate over a lookaside list once it is allocated? (I couldn't find an example of how to iterate over a lookaside list in Windows.) Since it is a list, we can't index it directly like an array, right?

There is never a guarantee that you can expand a heap block. There might have been another block allocated immediately after. This is a fundamental truth of heap allocation, and is the reason the C runtime “realloc” routine has to do an allocate/copy/delete when needed.
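The kernel-mode version of that allocate/copy/delete dance is the same idea; there is no single Windows call that does it for you. A rough sketch (pool type, tag, and the assumption that the block only ever grows are mine):

```c
#include <ntddk.h>

#define MY_POOL_TAG 'yarA'

/*
 * Grow a pool allocation from OldSize to NewSize bytes (NewSize >= OldSize
 * assumed). There is no in-place expansion: a new block is allocated, the
 * old contents are copied, and the old block is freed, exactly like realloc.
 * Returns NULL on failure and leaves the original block untouched.
 */
static PVOID MyGrowPoolBlock(_In_opt_ PVOID OldBlock, _In_ SIZE_T OldSize, _In_ SIZE_T NewSize)
{
    PVOID NewBlock = ExAllocatePoolWithTag(NonPagedPoolNx, NewSize, MY_POOL_TAG);

    if (NewBlock == NULL) {
        return NULL;
    }
    if (OldBlock != NULL) {
        RtlCopyMemory(NewBlock, OldBlock, OldSize);
        ExFreePoolWithTag(OldBlock, MY_POOL_TAG);
    }
    return NewBlock;
}
```

The usual trick is to grow geometrically (e.g., double the capacity each time) so the copies amortize out, but every caller holding the old base address still has to be told about the new one.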

There’s no way to iterate a lookaside list. Since allocated and free blocks are interspersed, the semantics would be difficult. The raison d’etre of the lookaside list is to reduce thrashing when a list has a lot of allocate/delete activity. Once allocated, you’re expected to treat the block just like a heap block, and use linked lists to stitch them together.
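To make that concrete, here is a rough sketch of the usual pattern: the lookaside list is only the allocator for fixed-size blocks, and a LIST_ENTRY chain is what you actually walk. The WORK_ITEM type and names are invented for illustration. Note also that the Depth parameter of ExInitializeNPagedLookasideList is reserved and must be zero (the system tunes the depth itself), which is why you couldn't find where to specify it.

```c
#include <ntddk.h>

#define WORK_TAG 'krWx'

typedef struct _WORK_ITEM {              /* hypothetical per-entry record */
    LIST_ENTRY Link;                     /* stitches blocks into a walkable list */
    ULONG      Data;
} WORK_ITEM, *PWORK_ITEM;

NPAGED_LOOKASIDE_LIST g_WorkLookaside;
LIST_ENTRY            g_WorkList;

VOID WorkInit(VOID)
{
    /* Depth (last parameter) is reserved/zero; the OS manages it. */
    ExInitializeNPagedLookasideList(&g_WorkLookaside, NULL, NULL, 0,
                                    sizeof(WORK_ITEM), WORK_TAG, 0);
    InitializeListHead(&g_WorkList);
}

NTSTATUS WorkAdd(_In_ ULONG Data)
{
    PWORK_ITEM Item = (PWORK_ITEM)ExAllocateFromNPagedLookasideList(&g_WorkLookaside);

    if (Item == NULL) {
        return STATUS_INSUFFICIENT_RESOURCES;
    }
    Item->Data = Data;
    InsertTailList(&g_WorkList, &Item->Link);    /* iteration happens over this list */
    return STATUS_SUCCESS;
}

VOID WorkIterate(VOID)
{
    PLIST_ENTRY e;

    for (e = g_WorkList.Flink; e != &g_WorkList; e = e->Flink) {
        PWORK_ITEM Item = CONTAINING_RECORD(e, WORK_ITEM, Link);
        UNREFERENCED_PARAMETER(Item);            /* ... process Item->Data ... */
    }
}
```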

I can’t help but think you’re guilty of premature optimization here. It is very difficult for me to conceive of a design where using a linked list vs an array block would represent anything more than an insignificant fraction of the CPU time.

A lookaside list is not a data structure that you use to hold references to memory that you are actively using. The lookaside list is used to reduce interactions with the main memory allocator when you know that you will frequently allocate and free blocks of a certain size. It is essentially a special purpose heap.

There is no mechanism in any OS or programming environment to allow you to expand a memory allocation without at least sometimes requiring a full copy of the data to a new base address. If someone else is using the adjacent memory, you can’t simply take it.

If you need to be able to expand your array, you need to weigh the cost of accessing a random element against the cost of expanding. Arrays are fast at accessing random elements (O(1)) but slow to expand (O(n)). Linked lists are fast to expand (O(1)) but slow to access a random element (O(n)). There are other data structures that can be used too, in particular binary trees (for sorted data) and hash tables, but there are more complex ones as well. These become more useful the larger your data set becomes, because they have higher costs to set up and access, so their asymptotic performance is only realized with a larger set. Tries are also useful for variable-length keys (especially dictionary data).
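For the sorted-data case, the kernel already ships a generic AVL table (RTL_AVL_TABLE) that you can lean on instead of rolling your own tree. A rough sketch with a made-up record keyed by a ULONG; the compare/allocate/free callbacks are yours to supply:

```c
#include <ntddk.h>

#define AVL_TAG 'lvAx'

typedef struct _MY_RECORD {              /* hypothetical record keyed by a ULONG */
    ULONG Key;
    ULONG Value;
} MY_RECORD, *PMY_RECORD;

static RTL_GENERIC_COMPARE_RESULTS NTAPI
MyCompare(PRTL_AVL_TABLE Table, PVOID First, PVOID Second)
{
    PMY_RECORD a = (PMY_RECORD)First, b = (PMY_RECORD)Second;
    UNREFERENCED_PARAMETER(Table);
    if (a->Key < b->Key)  return GenericLessThan;
    if (a->Key > b->Key)  return GenericGreaterThan;
    return GenericEqual;
}

static PVOID NTAPI MyAllocate(PRTL_AVL_TABLE Table, CLONG ByteSize)
{
    UNREFERENCED_PARAMETER(Table);
    return ExAllocatePoolWithTag(NonPagedPoolNx, ByteSize, AVL_TAG);
}

static VOID NTAPI MyFree(PRTL_AVL_TABLE Table, PVOID Buffer)
{
    UNREFERENCED_PARAMETER(Table);
    ExFreePoolWithTag(Buffer, AVL_TAG);
}

RTL_AVL_TABLE g_Table;

VOID TableInit(VOID)
{
    RtlInitializeGenericTableAvl(&g_Table, MyCompare, MyAllocate, MyFree, NULL);
}

NTSTATUS TableAdd(_In_ ULONG Key, _In_ ULONG Value)
{
    MY_RECORD rec = { Key, Value };
    BOOLEAN   newElement;

    /* The table copies the record into storage obtained from MyAllocate. */
    if (RtlInsertElementGenericTableAvl(&g_Table, &rec, sizeof(rec), &newElement) == NULL) {
        return STATUS_INSUFFICIENT_RESOURCES;
    }
    return STATUS_SUCCESS;
}

PMY_RECORD TableLookup(_In_ ULONG Key)
{
    MY_RECORD probe = { Key, 0 };
    return (PMY_RECORD)RtlLookupElementGenericTableAvl(&g_Table, &probe);
}
```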

As with everything related to performance, the question comes down to analyzing your access patterns and your acceptable points of failure.

What do you mean by “iterate”?
You can presume the first item is index 0 and so on, and iterate it like an array index, just a ton slower.

Presuming you need indexed access often, hash tables may help a lot.

What is the access pattern, what is the data, how often is it accessed?

Regards, Dejan.

In case I go with the option to use a lookaside list, how do we iterate over a lookaside list once it is allocated?

I agree with Mr. Roberts: This all sounds like premature optimization.

If performance is really THAT critical to you, I do not think you’ll be very happy with the performance of Windows lookaside lists (I hear everyone sigh, because they know this is one of my pet peeves). Just allocate yourself a big slab of memory and slice it up into allocation units. If you run out, allocate yourself another big slab and slice that up. Or whatever.
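A bare-bones sketch of that slab idea (entry type, slab size, and names are all invented for illustration): each slab is one contiguous allocation sliced into fixed-size units, and you only allocate again when a slab fills up:

```c
#include <ntddk.h>

#define SLAB_TAG         'balS'
#define ENTRIES_PER_SLAB 256             /* hypothetical slicing granularity */

typedef struct _SLAB_ENTRY {             /* hypothetical fixed-size entry */
    ULONG Data;
} SLAB_ENTRY, *PSLAB_ENTRY;

/* Each slab is one contiguous allocation holding ENTRIES_PER_SLAB entries;
 * slabs are chained so the structure can grow without copying anything. */
typedef struct _SLAB {
    struct _SLAB *Next;
    SLAB_ENTRY    Entries[ENTRIES_PER_SLAB];
} SLAB, *PSLAB;

typedef struct _SLAB_LIST {
    PSLAB Head;                          /* most recently added slab */
    ULONG UsedInHead;                    /* entries consumed in the head slab */
} SLAB_LIST, *PSLAB_LIST;

PSLAB_ENTRY SlabListAdd(_Inout_ PSLAB_LIST List)
{
    if (List->Head == NULL || List->UsedInHead == ENTRIES_PER_SLAB) {
        /* Current slab is full (or none exists yet): grab another big slab. */
        PSLAB Slab = (PSLAB)ExAllocatePoolWithTag(NonPagedPoolNx, sizeof(SLAB), SLAB_TAG);
        if (Slab == NULL) {
            return NULL;
        }
        Slab->Next       = List->Head;
        List->Head       = Slab;
        List->UsedInHead = 0;
    }
    return &List->Head->Entries[List->UsedInHead++];
}
```

Entries within a slab stay contiguous and directly indexable; you only pay a pointer hop per slab boundary rather than per entry.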
