We’ve seen that Temp allocations are the fastest kind of allocation, but is this always the case? When the fixed-size block of memory they draw from runs out, are the overflow allocations just as fast? Today we’ll test to find out!
Continuing the series, today we look specifically at “overflow” allocations in the Temp allocator. We’ve seen that there’s no need to explicitly deallocate Temp memory because it all gets cleared every frame, but do we need to deallocate “overflow” allocations that didn’t fit inside the block of automatically-cleared memory? Today we’ll find out!
Last week we dove into the code that executes when we deallocate Allocator.Temp memory to try to find out what happens. We ended up at a dead end and were only able to draw conclusions about what doesn’t happen when we deallocate. Today we’ll try another approach to see if we gain more insight into the Temp allocator.
Last week we learned a lot about Allocator.Temp, but we left some questions open. One of them was what happens when we explicitly deallocate Temp memory. We know we don’t need to, and that it’ll all be deallocated at the end of the frame anyway, but what actually happens when we deallocate it ourselves? Today we’ll dive in and try to find out.
When we use Allocator.Temp with a collection like NativeArray, how long does the allocation last? We’ve seen that Temp allocations are automatically disposed without the need to explicitly call Dispose, but when does the automatic dispose happen? Today we’ll test to find out!
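To make the question concrete, here’s a minimal sketch of the kind of allocation whose lifetime we’re asking about. It assumes Unity’s standard Unity.Collections API; the class name is illustrative and not from the article itself.

```csharp
using Unity.Collections;
using UnityEngine;

// Illustrative MonoBehaviour showing a Temp allocation with no Dispose call.
public class TempLifetimeExample : MonoBehaviour
{
    void Update()
    {
        // Allocate from the Temp allocator's fast, fixed-size block.
        NativeArray<int> values = new NativeArray<int>(16, Allocator.Temp);
        values[0] = 123;
        // Note: no values.Dispose() here. The open question is exactly
        // when this memory gets reclaimed automatically.
    }
}
```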
IDisposable is becoming more and more prevalent in Unity. Previously, it was typically only used for I/O types like FileStream. Now it’s used for in-memory types like NativeArray&lt;T&gt; to avoid the garbage collector. Needing to call Dispose manually means we’re explicitly managing memory, just like we’d do in lower-level languages like C++. That comes with some challenges, especially with shared ownership, which we’ll deal with today.
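As a taste of the shared-ownership problem, here’s a minimal reference-counting sketch: the last owner to release the resource is the one that actually disposes it. The SharedDisposable name and its API are hypothetical, not part of Unity or the article.

```csharp
using System;

// Hypothetical wrapper: the last owner to call Release() triggers Dispose().
class SharedDisposable<T> where T : IDisposable
{
    readonly T resource;
    int refCount = 1;

    public SharedDisposable(T resource)
    {
        this.resource = resource;
    }

    // Each additional owner calls Retain() once.
    public void Retain()
    {
        refCount++;
    }

    // Each owner calls Release() when done; the last one disposes.
    public void Release()
    {
        if (--refCount == 0)
        {
            resource.Dispose();
        }
    }
}
```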
Today we continue to explore how we can store values in less than a byte. We’ll expand the BitStream struct with the capability to write values in addition to just reading them. Read on to see how to implement this functionality and for the full source code, which you can use in your own projects.
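The write side boils down to setting individual bits in a byte buffer. Here’s a hypothetical sketch of that core operation; the BitWriter name and layout are illustrative, not the article’s actual BitStream.

```csharp
// Hypothetical bit-level writer: appends the low 'bitCount' bits of
// 'value' to a byte buffer, least-significant bit first.
struct BitWriter
{
    public byte[] Bytes;
    public int BitIndex;

    public void Write(uint value, int bitCount)
    {
        for (int i = 0; i < bitCount; i++)
        {
            if ((value & (1u << i)) != 0)
            {
                Bytes[BitIndex / 8] |= (byte)(1 << (BitIndex % 8));
            }
            BitIndex++;
        }
    }
}
```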
The smallest a C# type can be is one byte; the byte type and an empty struct are examples of this. But what if we want to store data in less than a byte to improve performance characteristics such as load times and CPU cache utilization? Today’s article does just that by packing data at the bit level!
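For example, eight bool fields each take at least a byte, but eight flags can share one. Here’s a minimal sketch of the idea; the names are illustrative.

```csharp
static class BitPacking
{
    // Pack up to eight flags into a single byte, flag i stored in bit i.
    public static byte Pack(bool[] flags)
    {
        byte packed = 0;
        for (int i = 0; i < flags.Length && i < 8; i++)
        {
            if (flags[i])
            {
                packed |= (byte)(1 << i);
            }
        }
        return packed;
    }

    // Read flag 'index' back out of the packed byte.
    public static bool Get(byte packed, int index)
    {
        return (packed & (1 << index)) != 0;
    }
}
```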
Floating-point math is fast these days, but fixed-point still has a purpose: we can use it to store real numbers in less than 32 bits. Saving a measly 16 or 24 bits off a float might not sound appealing, but cutting the data size in half or to a quarter often does when multiplied across large amounts of real numbers. We can shrink downloads, improve load times, save memory, and fit more into the CPU’s data caches. So today we’ll look at storing numbers in fixed-point formats and see how easy it can be to shrink our data!
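To illustrate the idea, here’s a hypothetical 16-bit “8.8” format: 8 integer bits and 8 fractional bits, half the size of a float. The names and format choice are a sketch for this teaser, not necessarily the format the article settles on.

```csharp
using System;

// Hypothetical 16-bit fixed-point format: 8 integer bits, 8 fractional bits.
static class Fixed88
{
    const int FractionalBits = 8;
    const float Scale = 1 << FractionalBits; // 256

    // float -> fixed: scale up and round to the nearest representable value.
    public static short FromFloat(float value)
    {
        return (short)Math.Round(value * Scale);
    }

    // fixed -> float: divide the raw stored value back down by the scale.
    public static float ToFloat(short fixedValue)
    {
        return fixedValue / Scale;
    }
}
```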