In the old days, and remember the main title of my blog: "from old school...", data was saved in files. For the new kids on the block: a file is an area of storage on your hard disk, and a hard disk was a device with spinning disks inside; a read/write head stored and read bytes to and from them.
So in those days we just wrote a block of bytes to these files, using the "BlockWrite" command! Why? Because it was, and still is, the fastest way to write a record as binary to disk.
And NO, streams are not faster. Down in the RTL, a TFileStream uses the same functions, it just needs more calls to get there. Maybe you call "BlockRead" and "BlockWrite" old style, but I don't care.
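Here is a minimal sketch of what I mean, in plain old Pascal; TCustomer and the file name are just made-up examples, not code from any of my units:

  type
    TCustomer = packed record
      ID: Integer;
      Name: string[50];   // fixed length, so every record has the same size
      Balance: Double;
    end;

  procedure BlockIODemo;
  var
    F: file;          // untyped file for BlockRead/BlockWrite
    Rec: TCustomer;
  begin
    Rec.ID := 1;
    Rec.Name := 'John Doe';
    Rec.Balance := 42.0;

    AssignFile(F, 'customers.dat');
    Rewrite(F, SizeOf(TCustomer));   // record size = one "block"
    BlockWrite(F, Rec, 1);           // the whole record in one call
    CloseFile(F);

    AssignFile(F, 'customers.dat');
    Reset(F, SizeOf(TCustomer));
    BlockRead(F, Rec, 1);            // and back in one call
    CloseFile(F);
  end;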
Before we got hard disks, we had floppies. You got the best performance out of a floppy if you could provide a buffer to read the whole track in one rotation. If your CPU or your floppy controller was not fast enough, the sectors on the track had to be interleaved, and in this case you needed more than one rotation - too bad.
What was the title of this post?
Oh yes. We stored data, mainly records, in binary files. Sometimes we had an index. The index was a string and a seek-number. We loaded the index file, found the matching string, and used the seek-number to find the record in the binary file. If we had this index, we called it a database.
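For the new kids again, a minimal sketch of such a "database", reusing the TCustomer record from above; TIndexEntry and FindRecord are my illustrative names, not any standard:

  type
    TIndexEntry = record
      Key: string[32];   // the search string
      RecNo: Integer;    // the seek-number: record position in the data file
    end;

  function FindRecord(const Index: array of TIndexEntry; const Key: string;
    var F: file; out Rec: TCustomer): Boolean;
  var
    I: Integer;
  begin
    Result := False;
    for I := Low(Index) to High(Index) do
      if Index[I].Key = Key then
      begin
        Seek(F, Index[I].RecNo);   // jump straight to the record
        BlockRead(F, Rec, 1);      // one call, whole record
        Exit(True);
      end;
  end;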
What about the performance? Besides the indexing algorithm, the database also has to load the data from disk, and it uses the same OS functions to do this. I assume a "normal" database that uses a file to store its data needs more than one block read. And on the client side? For a dataset with 100 fields you have to write 100 lines like: FieldValue := Query.FieldByName('FieldName').AsString. This is so awfully slow... With one "BlockRead" I get a 10 KB record with 1000 fields in the blink of a nanosecond (or less). Just one call!
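To make the contrast clear, this is the client-side pattern I am ranting about; Query and the field names are placeholders (uses Data.DB assumed):

  procedure LoadCustomer(Query: TDataSet; var Cust: TCustomer);
  begin
    Cust.ID      := Query.FieldByName('ID').AsInteger;
    Cust.Name    := Query.FieldByName('Name').AsString;
    Cust.Balance := Query.FieldByName('Balance').AsFloat;
    // ...imagine 97 more of these lines for a 100-field record,
    // each one doing a string lookup in the dataset's field list.
  end;

If you must do it this way, at least look up the TField references once before looping over the rows, instead of calling FieldByName again for every single row.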
Perhaps knowing all this I use a database nearly the same way as in the old days. The CRUD-Way. Just do Create, Read, Update, and Delete!
That's why I could migrate all my applications to a REST-Server in minutes.
Yes, I've used "Join" once or twice, and also a trigger or stored procedure, but just because somebody told me: "Let the database server do this, the database server can do it better". In some cases this is absolutely right, especially if you're dealing with really big datasets and/or your database is on a remote computer. That's for sure! Sending an update to a table with constraints is much easier than doing this with "BlockWrite", no question!
And having transaction-based I/O while updating the customer, invoice, and stock tables in one call, so that if anything goes wrong you just roll back instead of committing? Oh man, that helps a lot.
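Something like this sketch, assuming FireDAC with a TFDConnection named Conn; the tables and SQL are purely illustrative:

  // uses FireDAC.Comp.Client
  procedure PostInvoice(Conn: TFDConnection);
  begin
    Conn.StartTransaction;
    try
      Conn.ExecSQL('UPDATE customer SET balance = balance - :amt WHERE id = :id', [99.0, 1]);
      Conn.ExecSQL('INSERT INTO invoice (customer_id, amount) VALUES (:id, :amt)', [1, 99.0]);
      Conn.ExecSQL('UPDATE stock SET qty = qty - 1 WHERE item_id = :item', [4711]);
      Conn.Commit;    // all three changes become visible together
    except
      Conn.Rollback;  // if anything failed, none of them happened
      raise;
    end;
  end;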
In a few cases I only read some fields of a row, but most of the time I need all fields, so "SELECT * ..." is the call. And after I've got all the data, I need the aforementioned field-by-field assignment.
That's why I've programmed my JSONStore Client-Server Databasehandler in my Firemonkey Development Kit. I know the name is bad - you can use this unit in your VCL applications, too!
Just select which fields you want to access through the database; all other fields are stored in a blob field. Of course, I compress the JSON before storing it. After loading the dataset from the server (over REST) or from a local database, you read your database fields the normal way, and after that you just let the RTTI do its JSONToObject thing. Done...
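A sketch of the storing side to show the idea; SaveToBlob is an illustrative name, not the actual JSONStore API (uses System.Classes, System.SysUtils, System.ZLib, REST.Json, Data.DB):

  procedure SaveToBlob(Obj: TObject; Field: TBlobField);
  var
    Json: TStringStream;
    Compressed: TMemoryStream;
  begin
    // let the RTTI serialize the object to JSON
    Json := TStringStream.Create(TJson.ObjectToJsonString(Obj), TEncoding.UTF8);
    Compressed := TMemoryStream.Create;
    try
      ZCompressStream(Json, Compressed);   // compress the JSON before storing
      Compressed.Position := 0;
      Field.LoadFromStream(Compressed);    // one blob write instead of N field writes
    finally
      Json.Free;
      Compressed.Free;
    end;
  end;

Loading mirrors this: SaveToStream from the blob field, ZDecompressStream, and then TJson.JsonToObject<T> lets the RTTI rebuild the object.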
Hey, compare this to the old style! A database with some keys and a blob field that reads and writes all the data to our record/class in just one call. We are back in the '80s, well done!
One thing is different: these days we have 5 GHz, a 64-bit CPU, and most of the time 8 cores or more, and not 3 MHz, an 8-bit CPU, one core, and only 64 KB (not MB, not GB) of RAM.
But we are lucky, because with all that memory, all those cores, and all that CPU clock speed we can read our data from the database at the same speed as in the '80s...
I love DB's...