Monday, May 25, 2020

German CodeRage 2020

The German CodeRage will be live on the 2nd of July. I will present two sessions.

  1. 09:00 UTC Threads and Queues (11:00 MEST)
    How can I accelerate my application by using queues and threads to execute some workloads in the background? My session will show some examples of how to include this kind of asynchronous data processing in your app (a tiny teaser sketch follows below).
  2. 16:00 UTC SQLite in Threads (18:00 MEST)
    How can I use threads to access an SQLite database? Spoiler: you can use the techniques from session one.
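As a small teaser for the first session, here is a minimal sketch of the pattern it covers: a worker thread that pops workloads from a thread-safe queue and runs them in the background. The TJobProc type and the nil-job shutdown convention are my inventions for this example.

uses
  System.Classes, System.Generics.Collections;

type
  TJobProc = reference to procedure; // hypothetical job type for this sketch

  TWorker = class(TThread)
  private
    FQueue: TThreadedQueue<TJobProc>;
  protected
    procedure Execute; override;
  public
    constructor Create(AQueue: TThreadedQueue<TJobProc>);
  end;

constructor TWorker.Create(AQueue: TThreadedQueue<TJobProc>);
begin
  FQueue := AQueue;
  inherited Create(False); // start the thread immediately
end;

procedure TWorker.Execute;
var
  Job: TJobProc;
begin
  while not Terminated do
  begin
    Job := FQueue.PopItem; // blocks until a job is pushed
    if not Assigned(Job) then
      Break;               // convention: a nil job shuts the worker down
    Job();                 // execute the workload in the background
  end;
end;

Pushing work is then just a Queue.PushItem(procedure begin ... end); from the main thread.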
If you want to see my sessions live at the CodeRage event, you should register; then you can also be part of the Q&A session.

If you prefer to watch the sessions in English, they will be online on my YouTube channel at the same time!

So stay tuned, and please subscribe to my channel; perhaps you will find some teasers ;-)!





Monday, May 11, 2020

Delphi 10.4 Sydney - #Delphi104 - #ComingSoon

Hello, my friends!

Yes, I've got permission to blog about the upcoming new version, Delphi 10.4 Sydney!

First I want to answer the most urgent question of all FMX mobile developers:

What about ARC?

It's gone, it's history, I hope I will never see any kind of ARC again!

Next on my list is Metal... No more OpenGL-(ES) on iOS.

It has a nice new feature: you can set your frame rate, e.g. fixed at 60 FPS, or only refresh the screen if something has changed. I haven't done any long-term tests with this setting, but I assume that it will extend your battery life.
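If I remember the new FMX.Types globals correctly, enabling Metal and controlling the refresh behavior looks roughly like this; treat the exact names as assumptions and check the 10.4 documentation:

uses
  FMX.Types;

// Set these very early, e.g. in the .dpr before the forms are created:
GlobalUseMetal := True;                  // render with Metal instead of OpenGL (ES)
GlobalPreferredFramesPerSecond := 60;    // fix the frame rate at 60 FPS...
GlobalEventDrivenDisplayUpdates := True; // ...or only redraw when something changed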

With both changes, no ARC and Metal, my iOS app is flying! Not only by the numbers; you can really feel the new performance.

The other thing: the new managed records, which Marco Cantù has already blogged about. Please follow the link!

Why are these records so interesting? Well, I have a 34-year-old huge app, grown from Turbo Pascal to Delphi. In this app we're using mostly records for everything, and at the moment we can only use short strings.

Why?

Imagine a record with an AnsiString or UnicodeString field like:

type
  TFoo = record
    Name: AnsiString;
  end;

Every procedure that wants to buffer a TFoo instance does something like this (in our legacy code, the copy is a raw Move, as it was in Turbo Pascal):

procedure Bar(var aFoo: TFoo);
var
  Buff: TFoo;
begin
  Move(aFoo, Buff, SizeOf(TFoo)); // raw TP-style copy: only the string reference is copied
  aFoo.Name := 'Othername';
  //...
  Move(Buff, aFoo, SizeOf(TFoo));
end;

Because Move copies the record as raw memory, only the reference of Name is copied and the string's reference count is not updated. Assigning a new value to aFoo.Name then frees the string data that Buff.Name still points to, so the buffer is not working at all! (A plain Buff := aFoo would be handled correctly by the compiler, but a 34-year-old code base is full of raw Moves and BlockWrites.)

With the new managed records you get an explicit Assign operator, a copy method where you can, for example, call SetLength to create a real copy of the string.
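A minimal sketch of how such a record could look in 10.4; the Assign operator is the documented extension point, the SetLength trick is the one described above, and the rest is my assumption:

type
  TFoo = record
    Name: AnsiString;
    class operator Assign(var Dest: TFoo; const [ref] Src: TFoo);
  end;

class operator TFoo.Assign(var Dest: TFoo; const [ref] Src: TFoo);
begin
  Dest.Name := Src.Name;
  SetLength(Dest.Name, Length(Dest.Name)); // force a unique copy of the string data
end;

With this in place, a plain Buff := aFoo really duplicates the string data, so the old Move-based copies can be replaced by simple assignments.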

For the first time in history, a migration, from D2007 to 10.4, will save us a lot of work. We will see; it's still a long way.

BTW: Now it's a good time to renew your subscription. Please click on the banner below.

Happy coding with this #ComingSoon new version of #Delphi104!






Saturday, May 9, 2020

The database on network file problem!

Perhaps you're lucky and you're using a database server for your application, so every user has full access to the data at any time. Or your application will not run on a network, and you can simply use a file-based database like SQLite.

If not: welcome to the club of developers using no database at all, or some hacky tricks around the shared-access problem.

There are some implementations out there in the wild with different approaches to overcome the problem.

You may ask: why don't you just install a local database server?

Of course I could install a Firebird server or the free MS SQL Server Express, but then one PC must always be switched on so that it is reachable on the network to play the server role. As always, the easy solution is not possible.

Many of my clients have a really simple network: a "Fritzbox" as the router and up to three PCs connected to it. To be able to run our software on any of these PCs without having to switch on a "server PC" each time, there is a NAS connected to the Fritzbox holding all the data.

Shared access is handled by the file system. Yes, this is a working solution, no question! But as every application and its stored data keep growing, there comes a point where a real database would solve many problems.

Googling for SQLite and network, you will find some implementations: uSQLiteServer, SQL4Sockets, the SQLite ODBC driver, or SQLiteDBMS. And you will also find an easy protocol for handling these calls (TechFell).

Without going too deep into the research: everybody is using some kind of TCP/IP socket handler to restrict access to the "database".

So why restrict yourself to some cheap interface that can only handle the easy stuff?

Let's collect our needs. We want:
  • locking
  • an easy-to-use interface
  • thread safety, which would be perfect
  • perhaps asynchronous access
  • some kind of remote procedure calls
  • perhaps some kind of caching?

This should all be possible to write in a reasonable amount of time. The caching could be a challenge, but I would love to see a 64-bit implementation of it that is usable from a 32-bit application, so the always-empty 14 GB of spare memory could finally be filled with something useful.

I think I will start with a nice slim socket implementation, using UDP broadcast to find other clients or the "server". Then connect over TCP and implement a simple low-level protocol for the handshake, ping, and version checking. Perhaps also a plugin system that is able to auto-reload new versions from a server.
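The discovery round trip could look roughly like this; a sketch using Indy's UDP classes, where the port and the message format are made up for this example:

uses
  System.SysUtils, IdUDPClient;

// Hypothetical client side: shout into the LAN and wait for a server to answer.
function FindServer(out ServerIP: string): Boolean;
var
  UDP: TIdUDPClient;
  Answer: string;
begin
  Result := False;
  UDP := TIdUDPClient.Create(nil);
  try
    UDP.BroadcastEnabled := True;
    UDP.Broadcast('WHO_IS_SERVER?', 54321); // made-up discovery port
    UDP.ReceiveTimeout := 1000;             // wait at most one second for an answer
    Answer := UDP.ReceiveString;            // expected answer: 'SERVER:192.168.178.20'
    if Answer.StartsWith('SERVER:') then
    begin
      ServerIP := Answer.Split([':'])[1];
      Result := True;
    end;
  finally
    UDP.Free;
  end;
end;

After that, the client connects to ServerIP over TCP for the real protocol.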

Yes, this is all doable...

Why reinvent the wheel? My answer is, as always: because my wheels run better. Or at least I think so...

One big problem is still in my way: finding the time to do this.

Perhaps you would like to watch me live on YouTube trying this? Or watch my breakdown because I have underestimated the problem... In any case, please leave a comment and subscribe to my channel.

I have to travel to a planet with a lower rotation speed!




  


Tuesday, May 5, 2020

Is a database just data storage?

In the old days, and remember the main title of my blog, "from old school...", data was saved in files. For the new kids on the block: a file is storage on your hard disk, and a hard disk was a device with spinning disks inside; a read/write head could store and read bytes to and from it.

So in those days we just wrote a block of bytes to these files, using the "BlockWrite" command! Why? Because it was, and still is, the fastest way to write a record to disk in binary form.

And NO, streams are not faster: down in the RTL, a file stream uses the same functions but needs more calls to get there. Maybe you call "BlockRead" and "BlockWrite" the old style, but I don't care.

Before we got hard disks, we had floppies. You got the best performance out of a floppy if you could provide a buffer to read the whole track in one rotation. If your CPU or your floppy controller was not fast enough, the sectors on the track had to be interleaved, and in that case you needed more than one rotation. Too bad.

What was the title of this post?

Oh yes. We stored data, mainly records, in binary files. Sometimes we had an index. The index was a string and a seek number. We loaded the index file, found the matching string, and used the seek number to find the record in the binary file. If we had this index, we called it a database.

What about the performance? Besides the indexing algorithm, a database also has to load the data from disk, and it uses the same OS functions to do so. I assume a "normal" database that uses a file to store the data needs more than one block read. And on the client side? For a dataset with 100 fields you have to write 100 times: FieldValue := Query.FieldByName('FieldName').AsString. This is so awfully slow... With one "BlockRead" I get 10 KB into a record with 1,000 fields in the blink of a nanosecond (or less). Just one call!
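For comparison, here is the whole client-side "engine" of the old style; a sketch with a made-up record, which only works because every field has a fixed size (short strings, no AnsiString or UnicodeString!):

type
  TCustomer = record     // made-up record; all fields must be fixed-size
    ID   : Integer;
    Name : string[50];   // short string: the data lives inside the record
    //... hundreds more fields, roughly 10 KB in total
  end;

procedure LoadCustomer(RecNo: Integer; var Customer: TCustomer);
var
  F: File; // untyped file
begin
  AssignFile(F, 'customers.dat');
  Reset(F, SizeOf(TCustomer)); // one "block" = one record
  try
    Seek(F, RecNo);            // jump straight to record number RecNo
    BlockRead(F, Customer, 1); // one call reads the whole record
  finally
    CloseFile(F);
  end;
end;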

Perhaps that is why, knowing all this, I use a database nearly the same way as in the old days: the CRUD way. Just do Create, Read, Update, and Delete!

That's why I could migrate all my applications to a REST server in minutes.

Yes, I've used a "Join" once or twice, and also a trigger or a stored procedure, but just because somebody told me: "Let the database server do this; the database server can do it better." In some cases this is absolutely right, especially if you're dealing with really big datasets and/or your database is on a remote computer. That's for sure! Sending an update to a table with constraints is much easier than doing the same with "BlockWrite", no question!

And having transaction-based I/O, updating the customer, invoice, and stock tables in one call, and if anything goes wrong just rolling back instead of committing? Oh man, that helps a lot.
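In today's Delphi that pattern is only a few lines; a sketch using FireDAC, with made-up tables, fields, and parameters:

uses
  FireDAC.Comp.Client;

// FDConnection is a TFDConnection that is already connected.
procedure BookInvoice(FDConnection: TFDConnection; CustID, ItemID: Integer; Amount: Currency);
begin
  FDConnection.StartTransaction;
  try
    FDConnection.ExecSQL('UPDATE customer SET balance = balance - :a WHERE id = :id',
      [Amount, CustID]);
    FDConnection.ExecSQL('INSERT INTO invoice (customer_id, amount) VALUES (:id, :a)',
      [CustID, Amount]);
    FDConnection.ExecSQL('UPDATE stock SET quantity = quantity - 1 WHERE item_id = :i',
      [ItemID]);
    FDConnection.Commit;   // all three changes become visible at once...
  except
    FDConnection.Rollback; // ...or none of them
    raise;
  end;
end;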

In a few cases I only read some fields of a row, but most of the time I need all fields. So "Select * ..." is the call. And after I've got all the data, I need the field-by-field assignment mentioned above.

That's why I've programmed my JSONStore client-server database handler in my Firemonkey Development Kit. I know the name is bad; you can use this unit in your VCL applications, too!
Just select which fields you need direct database access to; all other fields are stored in a blob field. Of course, I compress the JSON before storing it. After loading the dataset from the server (over REST) or from a local database, you read your database fields the normal way, and after that you just let the RTTI do its JSONToObject thing. Done...
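The storing side of that idea could look like this; a sketch using REST.Json and ZLib, where the helper name is mine:

uses
  System.SysUtils, System.Classes, System.ZLib, REST.Json;

// Hypothetical helper: serialize any object to JSON and compress it,
// ready to be written into the blob field.
function ObjectToCompressedJson(AObject: TObject): TBytes;
var
  Source, Target: TBytesStream;
begin
  Source := TBytesStream.Create(
    TEncoding.UTF8.GetBytes(TJson.ObjectToJsonString(AObject)));
  Target := TBytesStream.Create;
  try
    ZCompressStream(Source, Target); // deflate the JSON
    Result := Copy(Target.Bytes, 0, Target.Size);
  finally
    Source.Free;
    Target.Free;
  end;
end;

Loading is the mirror image: ZDecompressStream, then TJson.JsonToObject<T>.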

Hey, compare this to the old style! A database with some keys and a blob field that reads and writes all the data in just one call to our record/class. We are back in the '80s. Well done!

One thing is different: these days we have 5 GHz, a 64-bit CPU, and most of the time 8 cores or more, not 3 MHz, an 8-bit CPU, one core, and only 64 KB (not MB, not GB) of RAM.

But we are lucky: with all that memory, all those cores, and all that CPU clock speed, we can read our data from the database at the same speed as in the '80s...

I love DBs...






Thursday, April 23, 2020

Live event: The Apocalypse Coding Group

Don't miss parts 7 & 8 of the Apocalypse Coding Group on the 25th & 26th of April at 14:00 UTC.

With:
Andrea Magni
Craig Chapman
Glenn Dufke
Ian Barker
Jim McKeeth
and me...

Where MVPs are trying to convince the viewers that they are worthy of this title, although it did not always look like that in live streams 1-6 (4 hours each).
It's fun, and you can annoy us in the live chat!

Part-7 : https://youtu.be/eJL_kp92N1Q
Part-8 : https://youtu.be/oh48IoNi9OI

The live stream is on Craig Chapman's channel! Please don't forget to subscribe to his channel and mine so you don't miss the upcoming events we are currently planning together!

Legacy Applications

OK - here is the problem:

An old application, started with TP 3 and grown to many millions of lines of code.
Full of Moves, records, and other stuff that is not ready to move from pre-Unicode to Unicode.

That means no real RTTI, no generics, no FireDAC, no native HTTP, no ITask, and none of the other stuff you love if you are using 10.x.

With a look at the roadmap ;-) the new managed records would help, but either way it's a huge task to get it running with XE.

The first idea was to use a DLL - of course - and this works, but not really: there is no properly working shared memory, and an FMX DLL has its own problems.

So you have to serialize everything over to the DLL and back. And if you have to serialize everything anyway, you could just as well do it over TCP.

The multi-user network sharing is working, but it could use some improvements.

A local database would also be a good idea; it would replace the old Enz-ISAM that I ported to Windows a long time ago.

Installing a real DB server is not possible. Any options?

I could install a Service on each workstation in the network.

What can a service do for you?
First of all, no more problems with admin rights; installing anything else becomes a piece of cake.
The service apps on the workstations could talk to each other over TCP connections. So without a dedicated server, the service apps could name one of them "the server"; if that workstation is going to shut down, another workstation is named "the server" from then on.
Every running app could ask the local service on 127.0.0.1: "Hey, give me the IP of the server." No need for config files. And instead of 4 GB, the server could use all the memory and load DLLs for different tasks.

And the client side?
The client can use a simple interface to the service, like OpenDB, LockTable, WriteData, UnlockTable, and CloseDB (CRUD with locking). Every command must (again) be serialized over TCP. The server could maintain the locking for each table. Voilà: a dedicated SQLite server, or any other DB. And of course much more:
internet updates, a DB cache, and any other service that is much easier to write in XE than in D2007.
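As a sketch, the client-side contract could be as small as this; the method set follows the commands above (plus a ReadData for the R in CRUD), and everything else, names, types, and the row format, is my assumption:

uses
  System.SysUtils; // TBytes

type
  IDBService = interface
    ['{8A1F3C62-0B5D-4E7A-9C41-D2F6E8B0A713}'] // made-up GUID
    function OpenDB(const ADatabase: string): Boolean;
    function LockTable(const ATable: string): Boolean; // fails or blocks if already locked
    function ReadData(const ATable: string; AKey: Integer; out ARow: TBytes): Boolean;
    function WriteData(const ATable: string; AKey: Integer; const ARow: TBytes): Boolean;
    procedure UnlockTable(const ATable: string);
    procedure CloseDB;
  end;

Behind the interface, every call is serialized over TCP to the service; the client never touches the database file itself.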

This is the idea...

Do you want to see me struggle to implement this, perhaps in a live YouTube session?
Or are you more into FMX and MVVM?

Anyway - please subscribe to my YouTube channel and leave a comment about what you want to see next!

Have a nice separation...