Post by John Bellardo
> If you delete an object using "delete <ptr>" the destructor will be
> called, and then the memory will be reclaimed by the pool. If you
> delete a pool, the pool will free all of its memory without regard to
> the objects (and hence the destructors) contained within the pool. So
> C++ destructors still work.
I was perhaps a little terse with Nickolay -- the discovery that DSQL
and the engine shared the same thread data slot kind of loused up my
afternoon.
I think it's very important that destructors are reliably called. It
isn't reasonable to expect developers to remember which types of objects
have reliable destructors, which have unreliable destructors, and which
have destructors that are never called. You have to admit that it does
add a layer of muddle that would be better avoided.
But here is another problem with pools -- memory efficiency. As
originally designed, the temporary pools were short lived. Each request
had a dedicated pool that lived only between compile_request and
release_request. DSQL added a second temporary pool, but with an
equivalent lifetime. Pools were never memory efficient. Memory was
allocated to pools in large hunks, so over-allocation was a certainty:
on average, half a hunk was wasted. Furthermore, request processing (both blr
and dsql) involved a parse tree, miscellaneous work products like
strings, and an execution tree. For dsql, a blr request was then
compiled. But when the statement was released, the whole shebang
evaporated, so the memory inefficiency was short lived.
In our brave new world, a dsql statement (a DStatement object) either
finds an existing compiled statement (CStatement) or creates and
prepares a new one. But where formerly the respective statements and
requests were immediately released for recycling, now the CStatement
along with its compiled BLR request takes up long-term residence in the
compiled statement cache in hopes of being of service for a future
identical SQL string. Now the memory efficiency begins to hurt. The
last half used hunk is wasted. The unreleased dsql syntax and language
nodes are wasted. The last half hunk in the compiled blr hunk is
wasted. The generated blr string is wasted. The blr parse treee is
wasted. When all is said and done, 90% of the memory used during
preparation and compilation of the request is lost to save the precious
10% (or less) actually consumed by the exe and rsb trees.
It wasn't my intention to implement a compiled statement cache, but it
wasn't really possible to implement CStatement any other way. All
CStatement has to do is bump its use count and insert itself into a SQL
string hash table, and we're there. The existing request level
instantiation (one request, many impure areas) handles multiple
instantiations of the request.
Ann will tell you that I'm a child of the computing depression, that I
worried myself sick over every extra byte so my PDP-11 Datatrieve users
could have a little more space for their absurdly complicated systems.
I do argue over and over that memory is huge and almost free, but I hate
the idea of wasting it.
OK, I suppose we could have separate compile and execute pools for both
dsql statements and blr requests. But if by reasonable use of
destructors we can significantly reduce the amount of memory wasted by
cached compiled statements, we're way ahead of the game. As I said
before, no matter how fast the compiler is, not compiling will always be
much, much faster.