General
Last Updated: 09/14/2003

Frequently Asked Questions for General:

  1. Why kbmMW
  2. Authorization Failed
  3. AutoSessionName
  4. Blocking or non blocking sockets
  5. Briefcase model
  6. Briefcase model, offline working and synchronizing
  7. Broadcast data changes to clients
  8. Converting from C/S to 3 tier
  9. Data compression
  10. Data encryption
  11. Demo program, where is it
  12. Developing
  13. Error: There must be at least one field
  14. IBX
  15. Indexes on server
  16. Information about a client's connection
  17. Inserting several records
  18. Invalid use of token
  19. Load balancing and fail over techniques
  20. Load balancing running multiple instances of MW server
  21. Many Query components in datamodule
  22. Multi CPU
  23. MWClientTransactionResolver hangs when resolving
  24. Optimising FieldDef collection
  25. Other transports
  26. Performance and ease of use
  27. Processing my own functions
  28. Query server getting out of sync with my Interbase server
  29. Query service KBMMW_QUERY already registered
  30. Record locking
  31. RegisterServiceByName MaxCount
  32. Resolving data from a custom client SQL query back to a backend database server
  33. Send a memorytable from the server to a client
  34. Sending / receiving messages and streams
  35. Send multiple memtables to a client
  36. Sockets, Indy versus Delphi
  37. SQL commands
  38. SQL commands, are they needed
  39. Stored procedure and trigger methodology
  40. Third party vendor additions
  41. Unsupported datasets
  42. Updating only changed fields
  43. Versioning does not work

[Back to Main]














Why kbmMW
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

kbmMW has been designed from the ground up to be _seriously_ extendable:
  • Don't like using Indy for transports? Plug in another transport method like ICS or others.
  • Do you need to move datasets via email, ICQ, or any other means of communication? Write an extension and plug it in.
  • kbmMW supports datasets the same way as Midas, except that instead of using TClientDataset (which is a memory hog and quite slow), it uses kbmMemTable, which is very lean in memory terms and quite fast too. With that also comes the benefit of all the nice features of kbmMemTable.
  • Do you need to work with other databases? Write an extension and plug it in if it's not already supported.
  • Are the databases not really databases? E.g. you want to access mail folders, permissions, messages and more: write a database extension to access the mail system, and use it in the clients as if the mail system were nothing but a standard database.
  • No runtime fees.
  • Very reasonable commercial purchase fee!
  • Full source included.
  • Possibility of free licenses for noncommercial applications. This must be approved case by case.
  • Built-in server object support. Do you want to publish your server objects via web services? Write a transport extension, and it will be web service aware (this is a planned feature for a later release). The server object can then be accessed simultaneously from several sources at the same time.
  • Do you want connection pooling towards backend data sources like databases and servers? Do you want client side and server side caching of request results for very speedy responses and offloading of backend databases?
  • Thousands of users who already know how to use kbmMemTable, and thus can perhaps assist with dataset related questions.
  • One of the better support services... according to other people, that is.


[Return to Top]


Authorization Failed
Kim Freeborn kfreeborn@closertohome.com    
28/08/2002

Add the following in the OnAuthenticate event (instead of OnAuthenticateQuery):

Perm := [mwapRead,mwapWrite,mwapDelete,mwapExecute];

[Return to Top]


AutoSessionName
Kim Madsen kbm@optical.dk    
21/10/2002

kbmMWPooledSession.AutoSessionName. Every time I set AutoSessionName to true, start the server and have a client connect to it, the server throws an exception 'Session name must be filled' (or something similar), and the client throws an exception 'At least one field must be present' (or something similar). If I set AutoSessionName back to false, then everything is OK.

I noticed that when AutoSessionName is set to true, the ThreadSessionName is set to xxx_1. However, the kbmMWClientQuery1 in the service datamodule is still set to xxx. How do I go about doing this AutoSession thingy?

Currently, in my MIDAS app, in the RDM, I have a TSession set to AutoSessionName, and the TDatabase and TTables all connect to the Session component (not the Session.SessionName). Thus, I suppose, all the session naming is handled automatically.

How do I make it work? Do I really need this AutoSessionName thing or (if not) then why does kbmMWPooledSession have this property?


Generally, _don't_ set it to true. It should be false for most types of setups. Only in very specific threaded situations would you set it to true.

The subject has been discussed on the C4D newsgroups a couple of times. Don't mistake it for the TSession component; they have nothing in common. The TkbmMWPooledSession usually only acts as glue between queries and a connection pool (just like a TDataSource works as glue between a TDataset and data-aware controls).

[Return to Top]


Blocking or non blocking sockets
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

Is kbmMW based on blocking or non-blocking sockets?

Actually kbmMW does not depend on either. It's up to the communication layer underneath. The currently supported comms layer - Indy - is blocking. But I would say you will never notice whether it's blocking or not in current Windows/Linux implementations... kbmMW is not yet on Linux... but :)

[Return to Top]


Briefcase model
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

Is there already, or will you be looking at, incorporating any replication capability into your n-tier components? I am currently looking at expanding several applications to include briefcase model replication; none of the products are released yet, even in their standard form.

If I can purchase this functionality and put it into my applications without having to force my users to purchase a "server" (that also costs over 10 times as much as the application!), then I would like to do this and obviously save myself a considerable amount of time.


You have the ability to replicate data from server to client and back again if you so want.

Internally it uses kbmMemTable (a fairly well-known in-memory table) for dataset storage and handling. It also allows you to save and load the data to external media like a CSV or a binary file for a briefcase model.

kbmMW is an open architecture. Thus special purpose database providers can be made for kbmMW where you can handle datasets, records etc. without having any 'real' database behind it, simply by using its briefcase features.

[Return to Top]


Briefcase model, offline working and synchronizing
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

Does kbmMW have support for the "briefcase" model, where the user can work with data offline and then submit it back to the server?

Yes. You can use query.SaveToFileViaFormat(aFilename:string; aFormat:TkbmCustomStreamFormat). Put a memtable stream format which supports deltas (currently the TkbmMWClientBriefCaseBinaryStreamFormat) on the form. In 0.91a you can set the DefaultStreamFormat property of the query, and thus only need to write query.SaveToFile(aFilename:string). Later, load the file using either query.LoadFromFileViaStream(aFilename:string; aFormat:TkbmCustomStreamFormat) or, in 0.91a, query.LoadFromFile(AFileName:string) after setting the DefaultFormat.
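
Put together, a minimal sketch could look as follows (MWClientQuery1 and BriefcaseFormat1 are hypothetical names for a TkbmMWClientQuery and a TkbmMWClientBriefCaseBinaryStreamFormat; the method names are the ones quoted above):

   // Going offline: persist the dataset including its delta information.
   MWClientQuery1.SaveToFileViaFormat('briefcase.dat',BriefcaseFormat1);

   // Later, back online: reload the saved data and resolve the pending changes.
   MWClientQuery1.LoadFromFileViaStream('briefcase.dat',BriefcaseFormat1);
   MWClientQuery1.Resolve;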

[Return to Top]


Broadcast data changes to clients
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

If data changes at the table side, I need the ability to broadcast certain coded messages to certain clients (more like triggers), and then the clients, upon getting those messages, can update themselves from the database.

OK, the way I would do that is to add a TkbmMWServer to the clients, and then register a custom trigger service inherited from TkbmMWCustomService to do the client side updating. Then the 'real' server will act as a client towards the 'real' clients needing to be updated. This also gives you the possibility of clients contacting other clients (if allowed) in a peer-to-peer network.
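
As a minimal sketch, registering such a trigger service on the client side could look like this (TMyTriggerService is a hypothetical TkbmMWCustomService descendant; the registration call follows the form shown in the "Stored procedure and trigger methodology" entry below):

   // On the client, which now also hosts a TkbmMWServer and a server transport:
   kbmMWServer1.RegisterService(TMyTriggerService,true,true,-1);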

[Return to Top]


Converting from C/S to 3 tier
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

If an application is written in FF2 as a standard C/S application, how difficult would it be, or what procedure would be needed, to convert it to three-tier with kbmMW?

It's generally relatively simple to convert a C/S application to a 3 tier solution. What's needed is to find all your FF2 access components in your client app, and move those onto some TkbmMWQueryService modules.

Then replace your FF2 access components in the client with TkbmMWClientQuery components and put one TkbmMWClientConnectionPool and one TkbmMWPooledSession on any form/datamodule. Connect the pooled session to the connection pool, specify a sessionname, and make sure to specify the same sessionname on all your query components.

For more advanced use, there can be more than one pooledsession component (with a different sessionname) and optionally more than one connectionpool (but usually not needed). Having more pooledsession components is only interesting (but not required) in a multithreaded client where one wants to make sure that the queries are performed in a specific order and not first come, first served.

Then add a TkbmMWTCPIPIndyClientTransport and set up the host address and port number to match the server. This is relatively easy to do. On the server side, you need to register the services the clients can use. This is done by the RegisterService or RegisterServiceByName methods. Each service has its own datamodule _inherited_ (not copied) from TkbmMWCustomService (File, New, kbmMW Service objects) for non dataset oriented services, or from TkbmMWQueryService in the same place for dataset oriented services.

If you inherit from a QueryService, you can then put data components like f.ex. TkbmMWFF2Query and TkbmMWFF2Resolver or other components of your liking on it. Just make sure to set the Query property of the inherited TkbmMWQueryService datamodule to point at a TkbmMWCustomPooledQuery descendant. Check the demo server project. This gives you automatic connection pooling and caching abilities towards the backend database.

Another possibility is to inherit from TkbmMWCustomQueryService and override PerformQuery, PerformFieldDefs and optionally PerformResolve if you want to create a service without using the kbmMW connection pooling components.
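
As a rough sketch of the client side wiring described above (the property names Host, Port, Transport and ConnectionPool are assumptions, and this is normally done in the Object Inspector at design time rather than in code):

   // Transport pointing at the kbmMW application server.
   kbmMWTCPIPIndyClientTransport1.Host:='192.168.0.1';
   kbmMWTCPIPIndyClientTransport1.Port:=3000;

   // Glue: the connection pool uses the transport, the pooled session uses the pool.
   kbmMWClientConnectionPool1.Transport:=kbmMWTCPIPIndyClientTransport1;
   kbmMWPooledSession1.ConnectionPool:=kbmMWClientConnectionPool1;
   kbmMWPooledSession1.SessionName:='MAIN';

   // Every client query refers to the same session name.
   kbmMWClientQuery1.SessionName:='MAIN';
   kbmMWClientQuery1.Open;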

[Return to Top]


Data compression
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

Does kbmMW have built-in support for compression when data is traveling between client and server?

Yes, in v. 0.91a compression was introduced. Additional streaming formats can be created by extending either TkbmMWStandardRequestTransportStream or TkbmMWCustomRequestTransportStream, and TkbmMWStandardResponseTransportStream or TkbmMWCustomResponseTransportStream, and then registering them with the global method:

kbmMWRegisterTransportStream(SomeRequestTransportStreamClass,SomeResponseTransportStreamClass).

The file kbmMWZipStdTransStream.pas builds on the standard transport stream format and adds zip compression to it. On the TkbmMWxxxTransport components you'll notice a property named StreamFormat. This is used to choose between the different transport stream formats registered with kbmMW. 0.91a has built-in support for STANDARD and ZIPPED.

[Return to Top]


Data encryption
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

Does kbmMW have built-in support for encryption for security purposes?

Yes. A component class named TkbmMWCustomCrypt can be inherited from to create specialized en/decryption features. A TkbmMWEventCrypt component is included which publishes two events in which the en/decryption code can be added.

[Return to Top]


Demo program, where is it
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

The demo client and server are part of the kbmMW package, including source code. After installation, you'll find them in the kbmMW/demo/client and kbmMW/demo/server folders.

[Return to Top]


Developing
Kim Madsen kbm@optical.dk    
07/11/2002

> We are searching for a good solution that should have roughly these features:
  * Support a Database
  * Support more Developers
  * Support real Version Controlling
  * Support Bug-Tracking _with_ some connection to internet (?)
  * Support a Client Database where you add all your information about your clients
  * Support managing of different documents (like helpfiles or images)


One thing is which IDE you choose and what options it has. Another is how you split up a project into nicely sized bites which the developers can program. Too small bites and the developers will not think it's fun... too large and the developers might be stepping on each other's toes and have serious difficulties testing it thoroughly.

I would definitely suggest you check out kbmMW, which is an n-tier product incorporating a system called services.

How to prepare for simultaneous developers on one project:

1) Decide how many different clients it will contain... for example web clients, thin clients (client side applications) etc.

2) Decide the overall logical split of your project. Can it be split departmentwise? or functionality wise?

3) Start splitting each of the logical project units into functional units, or services as we refer to them in kbmMW. File transfer = one service, user authentication = 2nd service, 1st departmental database functionality = 3rd service, 2nd departmental database functionality = 4th service, overall monitoring = 5th service, overall application administration = 6th service, misc functionality like exchange rates, conversions etc. each in their own service.

4) Remember to _remove_ business logic from your client side regardless of whether the client is web based or not. The client should _output_ data and allow the user to _input_ data, but should generally _not_ process data unless there is good reason. Move the business logic to the server side in a service and it will be available for all client types to use without recoding. If company business rules change, you often only need to change some code in a service on the server instead of having to replace all clients.

5) Let each developer test their own service, and optionally create client components matching special service functionality to ease the job for GUI developers.

6) Register all services to the kbmMW application server and run.

Using that method with kbmMW you will see results very fast. Your developers will feel that they have responsibility for specific parts of the project, and that they can test what they are doing, even without having the code from all the other developers. This gives developer satisfaction, which in turn gives better productivity. Splitting the app development wisely into services also gives you the opportunity to add more features later on without affecting the original structural design of your application. You can even easily add new developers ad hoc for doing new services as needed.

kbmMW directly supports more than 16 different databases, including ADO and dbExpress supported databases.

[Return to Top]


Error: There must be at least one field
Kim Madsen kbm@optical.dk    
21/09/2002

I made a server and client and the client gives me the "There must be at least one field" error in the InternalOpen method.

- Open your server in the IDE and run it.
- Open your client externally and run it.
- Try to provoke the error again.
- Now in the IDE you will get to see precisely what the error is about. Most probably it's a database connection problem.

[Return to Top]


IBX
Kim Madsen kbm@components4developers.com    
08/09/2002

Cannot compile kbmMW 0.93, D5 with KBMMW_IBX5_SUPPORT

Make sure you are using the latest IBX version for Delphi 5. Latest is v. 5.03 which can be downloaded from: http://codecentral.borland.com/codecentral/ccweb.exe/listing?id=17556

[Return to Top]


Indexes on server
Kim Madsen kbm@optical.dk    
21/10/2002

Why doesn't kbmMW give something like a 'Key Violation' when one of the key fields (or an index with ixUnique) duplicates an already existing key value?

Let's say my kbmMWBDEQuery has:
  sql = 'select UserID, UserName from users'
  tablename = users
  keyfieldname = UserId

Then when I open the query in the client and make a new record having a UserID that is the same as in another record, kbmMW is fine with that. There are no errors, etc.

I tried to set IndexName=IUserId with ixUnique on, but the result is still the same. Do I need to do a manual check for duplicate keys myself? How do I go about doing it? Also, the returned dataset doesn't seem to be sorted. I tried to assign SortFields to UserID, but it doesn't sort. How do I sort the dataset that is returned to the client?


The reason is that the client doesn't know anything about the indexes on the server. Thus you will not get an exception until you resolve the changes back to the server. On the client you can use whatever sorting you would like, f.ex. via the SortOn etc. methods.

[Return to Top]


Information about a client's connection
YG Lim lim_yg@hotmail.com    
24/04/2003


If you would like to know the number of users connected to the middle tier or their identity, and you are using Indy as transport, you can do it as follows:

1. Iterate through all the connection threads maintained by Indy as follows:

  var
   Lst : TList;
   i   : Integer;
  
  begin
   Lst := YourkbmServer.IdTCPServer.Threads.LockList;
   try
    for i := 0 to Lst.Count-1 do
    begin
     Edit1.Text := TIdPeerThread(Lst[i]).Connection.Socket.Binding.PeerIP;
     Edit2.Text := IntToStr(TIdPeerThread(Lst[i]).Connection.Socket.Binding.PeerPort);
    end;
   finally
    // Unlock the same thread list that was locked above.
    YourkbmServer.IdTCPServer.Threads.UnlockList;
   end;
  end;



2. Another method is to get the thread ID whenever a user connects to the server, by assigning a handler to the OnConnect event of the server.

The thread ID can be obtained with: TkbmMWServerTransportInfo(self.RequestTransportStream.Info).Client

If you want to get the client's IP address it should be enough to use ClientIdent.RemoteLocation. This gives the remote client's IP address and port number. If you need the host and port separately, you can use the following helper to split the string:

   procedure GetIPPort(var Host: string; var Port: integer);
   var
    I: Integer;
   begin
    I := Pos(':', Host);
    if I = 0 then
       raise Exception.Create('GetIPPort: Invalid format, must be: ''Host:Port''');
    // Everything after the colon is the port; the rest is the host address.
    Port := StrToInt(Copy(Host, I + 1, MaxInt));
    Host := Copy(Host, 1, I - 1);
   end;



 Then assign the values obtained to a TObjectList or a memory table. Since you now have the thread ID, whenever the client disconnects, get the thread ID in the OnDisconnect event and use it to remove the corresponding row or item. You may want to wrap the insertion and deletion of the rows/items in a critical section.

[Return to Top]


Inserting several records
Kim Madsen kbm@components4developers.com    
02/10/2002

How do I insert several records into a database under transactional control in a custom service on the server?

As an example using ADO Express:

You use the TkbmMWADOXConnectionPool on the server to obtain an ADO connection, then start a transaction on that connection, insert the records using a standard TADOQuery or similar component and finally commit the transaction. When the connection is of no further use, release it again.

var
   con:TkbmMWADOXConnection;
begin
   // Get a connection from the connection pool.
   con:=ADOXConnectionPool.GetBestConnection;
   if con=nil then 
      raise exception.Create('No connection.');

   // Try to lock the connection for our use.
   if not con.LockConnection(-1,-1) then
      raise exception.Create('Not able to lock connection.');
   try
      ADOQuery1.Connection:=con.Database;
      con.Database.StartTransaction;
      try
         // Use SQL.Text (not SQL.Add) so the second statement replaces the first.
         ADOQuery1.SQL.Text:='insert into tblcustomers( ....';
         ADOQuery1.ExecSQL;
         ADOQuery1.SQL.Text:='insert into tblcustomers( ....';
         ADOQuery1.ExecSQL;
         con.Database.Commit;
      except
         con.Database.Rollback;
         raise;
      end;
   finally
      con.UnlockConnection;
   end;
end;



[Return to Top]


Invalid use of token
Kim Madsen kbm@optical.dk    
21/10/2002

My application uses dBase files. Some of the field names like 'ACTIVE', 'DESC', etc. are okay to the BDE. However, since kbmMW is SQL based, every time an Update/Delete/Insert SQL statement is generated, the exception 'Invalid use of token' is thrown during the Query.Prepare stage. Is there a workaround or do I need to rename all my fields, like ACTIVE->FACTIVE, DESC->FDESC, ...?

Perhaps set the QuoteAllFieldNames property of the resolver to true. It will put " around all field names.
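
A minimal sketch (MWBDEResolver1 is a hypothetical resolver component name):

   // Quote field names such as DESC so the generated SQL treats them as identifiers.
   MWBDEResolver1.QuoteAllFieldNames := true;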

[Return to Top]


Load balancing and fail over techniques
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

In a later alpha release, load balancing and fail over techniques will be part of kbmMW. It's already possible to do that today using kbmMW, but it requires some manual labor.

One way is to create a kbmMW based server whose only job is to tell clients which other server they should be connecting to. Thus the client first connects to the 'master distributor', which returns an IP address and port number (for non TCPIP based communication, some other connection parameter identifying the server to connect to) to the client. Then the client disconnects, and reconnects to the new address obtained.

[Return to Top]


Load balancing running multiple instances of MW server
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

So basically each copy of the kbmMW server is like a client to the database server, which means I can have multiple copies of the kbmMW server running on several machines talking to one database server, and each kbmMW server can have many of its own clients. Right so far?

Then at runtime how can I split the client load among all these kbmMW servers?


Yep, each kbmMW server acts like an advanced multithreaded, multi-connection client to the backend server, and yep, you can have multiple kbmMW servers running against the same backend database. kbmMW will at a later stage in the alpha releases contain automatic fail over and load balancing features. But yes... you can already at this time split the load between several kbmMW servers by writing a little code yourself.

The easiest way would probably be to create one kbmMW server which only contains a service that maintains a list of other kbmMW servers. Clients would then contact this one 'masterserver' and ask for the IP address of one of the other servers, after which the client disconnects and connects to that server.

You can also have connections to both the 'masterserver' and the database kbmMW server at the same time by simply leaving the masterserver client transport in place, and then adding another TkbmMWxxxClientTransport, setting its Host address to point at the kbmMW database server.

[Return to Top]


Many Query components in datamodule
Kim Madsen kbm@optical.dk    
21/10/2002

I noticed that even when I put more than one query component in the TkbmMWCustomService datamodule, I can only access one query component at a time (via the datamodule's Query property). Does this imply that even if I have more than one query component in the datamodule, I can only access one at a time?

The situation is this: I have 21 tables that I want to *share*. In my current MIDAS app, I have 21 TTables (along with 21 data set providers) and they are all put under the TRemoteDataModule. This is basically a 3-tier system, using remote dial-in.

Basically when my clients (which are limited to 5 only) dial in and connect to the server, after logging in, they will fetch all 21 tables. There is basically a progress bar that says something like 'Logging in... Retrieving server data...', etc.

How do I implement this in kbmMW? Do I need to create 21 query services and individually put a kbmMWBDEQuery in each of them? Will this cause problems (i.e. insufficient memory, etc.) when 5 clients connect to the kbmMWServer simultaneously, i.e. 5x21 = 105 query services? I am not using any database servers, just normal dBase files and TTables.

What do you recommend?


No, you don't have to do it like that.

The Query property of the service is used when a client requests the _default_ query component. The default query is requested when the client does not specify which published query component on the server to use.

The client can specify to use a specific published query component by using the named query syntax:

eg. clientquery.Query:='@NAMEDQUERY@select * from xys';

It will look for the query named NAMEDQUERY on the server. If it exists and is published (and the client is allowed to use it), the given select statement will run on it.

Generally I would recommend _not_ putting SQL statements inside the client, but instead defining the needed statements on the server and using parameters for where clauses. The client gets to know about any parameters the server has specified for a query.
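
For example, opening a published server side query from the client could look like this (a minimal sketch; MWClientQuery1 and the query name CUSTOMERS are hypothetical, and the assignment style follows the named query example above):

   // Use the server side query component named CUSTOMERS instead of client side SQL.
   MWClientQuery1.Query:='@CUSTOMERS';
   MWClientQuery1.Open;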

[Return to Top]


Multi CPU
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

Does the kbmMW server component take advantage of multiple CPUs when it's threading its jobs? In other words, for a large number of users, by installing a 2-CPU or 4-CPU machine, would kbmMW use each processor?

Yes it does. Each active service instance in kbmMW runs in its own thread, and is thus subject to the OS selecting which CPU to run it on. 0.91c added the capability to set a CPU affinity mask where you can choose which CPUs kbmMW may use.

[Return to Top]


MWClientTransactionResolver hangs when resolving
Jeff Butterworth jefbut@drs.com.au    
14 sep 2003


You must have a database connection available for each dataset. If you are resolving three datasets, you need three connections available.

[Return to Top]


Optimising FieldDef collection
Richard J. Gillingham richard@swedgedev.co.uk    
20/04/2003

I have all my kbmMW client datasets on data modules to ease their management. The data modules present a more rigorously typed interface for the other layers to call into. Along the way I noticed quite a lot of chatter to the app server at various times and it's taken a little while to realise why.

If you have, say, 50 client datasets on one data module, when you create that data module the .Loaded method of each of the client dataset components gets called too. In here the framework does various operations to initialise the internal state of the datasets. One of these is collecting the FieldDefs and stored procedure params from the app server. This happens for every client dataset regardless of how many of them are subsequently used.

Here's the tip:
To get around this, set the AutoFieldDefsOnOpen property of each dataset to mwafoNever, and before you actually use the dataset call FetchDefinitions on it. There are downsides, mind: the definitions won't be collected at design time either, although the flag can be toggled in that case.
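
A minimal sketch of the tip (MWClientQuery1 is a hypothetical TkbmMWClientQuery; the property value and method name are the ones quoted above):

   // At design time or once at startup: no automatic FieldDefs round trip.
   MWClientQuery1.AutoFieldDefsOnOpen := mwafoNever;

   // Later, just before the dataset is actually needed:
   MWClientQuery1.FetchDefinitions;
   MWClientQuery1.Open;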

[Return to Top]


Other transports
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

What would it take to have a client on a PDA talk to a kbmMW server running on a Windows server via wireless technology?

Actually anything can be a client to a kbmMW based server. All that's needed is a transport that the client can use, e.g. TCPIP, and a transport streaming format that some code on the PDA can understand. The TCPIP transport is already there in the TkbmMWTCPIPIndyServerTransport. A new transport streaming format is probably needed to make it easy for the client to access the services in kbmMW.

That’s about it...

[Return to Top]


Performance and ease of use
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

I cannot give you numbers showing performance, but kbmMW is pretty speedy. I've heard from several people comparing speed between other solutions and kbmMW that kbmMW is fast, even though it is not really optimized yet. kbmMW will later handle fetch on demand of records and fields, which it does not do now. This will make the perceived speed even faster.

kbmMW is also designed to be easy to use. For distributing databases in 3-tier, all you do is add a special datamodule (File/New/kbmMW Business Objects/TkbmMWQueryService), put a few queries on it and set the Published property to true on the ones you want clients to see (queries can include parameters, which will then be known to the client), add a resolver component and that's basically that. On your main server project form you'll need to add a connection pool and some database templates depending on the type of database behind it, and then you can start your server.

On the client it can be very much like doing 2-tier development if you choose to do so. You just add some TkbmMWClientQuery components, set their Query property, add a transport component and a connection pool, and then you are running.

Once you get a grip on the relatively few components needed, it should be very easy to use.

[Return to Top]


Processing my own functions
Jeff Butterworth jefbut@drs.com.au    
14 sep 2003

In a query service, how can I also process my own functions? If I add a ProcessRequest function, the queries no longer work.

Add the ProcessRequest override to the query service as follows. The key is to call the inherited ProcessRequest for functions you do not handle yourself, so the built-in query handling keeps working:

function TMyService.ProcessRequest(const Func:string; const
ClientIdent:TkbmMWClientIdentity; const Args:array of Variant):variant;
var
  strFunc : string;
begin
  strFunc := Uppercase(Func);
  if strFunc = 'MYFUNCTION' then
    Result := myfunction
  else
    // Let the query service handle everything else (queries, resolving etc.).
    Result := inherited ProcessRequest(Func,ClientIdent,Args);
end;
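
From the client, the custom function can then be invoked with a request call. A minimal sketch following the SendRequest style used in the "Send a memorytable from the server to a client" entry (the component name, service name and version string are assumptions):

  var
   v : Variant;
  begin
   v := dmConnection.Client1.SendRequest('KBMMW_QUERY','1.00.00','MYFUNCTION',[]);
  end;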



[Return to Top]


Query server getting out of sync with my Interbase server
smalltractorboy machinery@mustacheboys.com    
10/08/2002

I am having a problem with my query server getting out of sync with my Interbase server. Everything is fine if all modifications to the database go through my middleware server. However, if I make changes to my tables using a different tool, the middleware server never sees the changes until I shut it down and bring it back up! Is anyone else experiencing this problem? BTW, I am using the Firebird version of Interbase as my database server.

I had assumed that the default isolation level for TIBTransaction was "read committed"; however, it is not. For those of you who were stumped by this problem at one time or another, "read committed" transaction isolation can be set by either double clicking the TIBTransaction control, or entering the following strings in the TIBTransaction "Params" property:

"read_committed"
"rec_version"
"nowait"

If you do not set these parameters when using IBX, Interbase will use a default transaction parameter buffer (TPB) that contains the binary equivalent (the strings are converted to their binary equivalent by the IBX code) of the following strings:

"write"
"concurrency"
"wait"

The killer parameter here is "concurrency", which basically ensures that one cannot see changes committed by other simultaneous transactions.
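
In code, the same parameters can be set on the server side transaction before it is started; a minimal sketch (IBTransaction1 is a hypothetical TIBTransaction component name):

   IBTransaction1.Params.Clear;
   IBTransaction1.Params.Add('read_committed');
   IBTransaction1.Params.Add('rec_version');
   IBTransaction1.Params.Add('nowait');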

[Return to Top]


Query service KBMMW_QUERY already registered
Kim Madsen kbm@optical.dk    
21/10/2002

Also, there is an instance where the server runs and an exception comes up saying 'Query service KBMMW_QUERY already registered'. I tried to do an UnregisterService('KBMMW_QUERY') but the exception still comes up. This happened while following the whitepaper on the query server, and I had to rename KBMMW_QUERY to KBMMW_QUERY1 to make the server run. Although this is OK, how do I *force* unregister this KBMMW_QUERY service?

For some reason you have two RegisterService calls registering the same service. If you use RegisterService, it will register with the _default_ name of the service, i.e. the name that the service has been programmed to use as default via the GetPrefServiceName method. If you use RegisterServiceByName you can register a service under a different name than the default.

You would need to do this if you, for example, have two or more query services in the same application server.

[Return to Top]


Record locking
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

With FF2, when a client is working on a record and someone else ATTEMPTS to change it, it tells them the record is locked BEFORE they spend the time changing the record, submitting it and then finding out the record is locked. I played around with MIDAS and I didn't like how it worked. Could you please explain to me how record locking works with kbmMW?

Usually n-tier setups are designed to be stateless. The reason is that it allows multiple users to share the same few precious resources of the servers, like connections, caches etc.

2-tier setups are most often of the type: one client, at least one connection. Thus 1000 clients -> at least 1000 connections. Having a connection for each client (and thus also the possibility of a live cursor on the database side) gives the benefit of being able to react to database related errors at record or even field level on the fly.

Stateless setups are not very good at on-the-fly database related errors, but are instead much better at serving a large load.

kbmMW is a stateless approach, although it will contain possibilities for stateful connections at a later stage (some of the code to support that is already in place in kbmMW).

kbmMW works in the way that you f.ex. request some records from the server; the records are requested from the backend database and sent to the client, after which the cursor holding the original request from the backend database is released and thus lost. The client receives the records as a detached dataset, and can modify and change as much as desired.

To have the changes reflected back in the backend database, the client needs to _resolve_ the modifications back. Issuing a resolve generates a delta dataset which is sent to the server and processed there by applying the actual modifications to the backend database server.

Some of these updates on the database may succeed and some may fail. kbmMW maintains a list of all the updates that failed, and sends that list back to the client, where the client is given a chance to rectify the troubles and issue another resolve.

This is most likely much the same as the Midas you didn’t like too much :)

_But_ there are other possibilities:

- You can have a flag (set on the server side) which indicates whether any other user is currently using the record.
- You can decide not to include records, which other users are using.
- You can decide to include all records, and in the BeforeEdit/BeforeInsert/BeforeDelete etc. try to modify one field of the record (giving it the same value it had), and resolve that back to the server. This will give you an error if it didn’t succeed, and thus an indication of any users locking it.
- You can create a server side service which you call in the BeforeEdit/BeforeInsert event including the key values and returning a Boolean value telling if the record can be updated or not.

These options are not a guarantee that no other user will be able to pinch the record and lock it in between the time the query is made and the actual resolving is made back.

There are probably other alternatives too.... and I will have it in mind if I get a good idea of how to solve it in an elegant and flexible way.

[Return to Top]


RegisterServiceByName MaxCount
Kim Madsen kbm@optical.dk    
21/10/2002

RegisterServiceByName('KBMMW_QUERY',TMyQueryService,true,true,-1,0)

It says that by setting MaxCount to -1, there is no limit to the number of running instances of this service. Does this imply that if I set it to 1 or X, then only 1 or X clients can connect to it at any one time (until the occupying client disconnects, etc.)?

I tried to set MaxCount=1 but a whole lot of clients can still connect to the service. How do I limit the number of clients?


MaxCount defines how many simultaneous instances of the given service are allowed to run. This also limits how many client requests can be run at the same time on that specific service. It does however not control how many clients are allowed to connect.

If you have set MaxCount to 5 and 10 clients request that service at the same time, 5 will run and the remaining 5 will be queued and run when there is room.

[Return to Top]


Resolving data from a custom client SQL query back to a backend database server
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

How do I resolve changes with the IBX components with client side SQL ?

This explains how to resolve data from a custom client SQL query back to a backend database server:

You need to setup the following on the serverside TkbmMWQueryService:
- Set AllowClientStatement to true. This allows the client to send client side SQL statements.
- Set AllowClientKeyfields to true. This allows for the client to specify which fields are unique key fields.
- Set AllowClientTableName to true. This allows the client to specify which actual table resolving should be done against. It should usually be the same as the table included in the statement.
- Set Query to point at your main TkbmMWIBX5PooledQuery component.
- Make sure to set the SessionName of the query component.
- Add a TkbmMWIBX5Resolver
- Set the TkbmMWIBX5PooledQuery.Resolver to point at the TkbmMWIBX5Resolver.
- Set TransportStream to point at a TkbmMWBinaryStreamFormat. This component is used for moving the dataset data/deltas between client and server.

On the clientside:
- Set the TkbmMWClientQuery.EnableVersioning to true.
- Set the TkbmMWClientQuery.Query to contain your SQL.
- Set the TkbmMWClientQuery.KeyFields to a list of fields which can be used to find a record uniquely. The fields should usually not be part of the data modified from the client; it's usually an ID field or similar. Several field names can be specified with ; between them.
- Set the TkbmMWClientQuery.TableName to the name of the table to resolve back to.
- Set the TkbmMWClientQuery.TransportStream to a TkbmMWBinaryStreamFormat put on the client.
- Call the Resolve method of the TkbmMWClientQuery to actually start the resolving back to the server database.
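
Putting the client side steps together in code, a minimal sketch could look as follows (component names are hypothetical, and the assignment style follows the examples used elsewhere in this FAQ):

   MWClientQuery1.EnableVersioning:=true;
   MWClientQuery1.Query:='select * from tblcustomers';
   MWClientQuery1.KeyFields:='ID';
   MWClientQuery1.TableName:='tblcustomers';
   MWClientQuery1.TransportStream:=MWBinaryStreamFormat1;
   MWClientQuery1.Open;
   // ... let the user edit the detached dataset ...
   MWClientQuery1.Resolve;  // send the deltas back to the backend via the server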

kbmMW also supports having multiple predefined queries with parameters on the server which the client can select between when executing the query. To do that, the query on the server needs to be published by setting its Published property to true. The client can then set the query to @somename, where somename is the name of the server side query component. Check out the server and client demo.

Generally I would recommend limiting the use of client side SQL to only the cases where it's really needed.

The reason is that it makes it much easier to create new versions of the business logic without having to update the clients too; it is thus just good design practice. Performance-wise and feature-wise, client side SQL and server side SQL don't differ much.

[Return to Top]


Send a memorytable from the server to a client
YG Lim lim_yg@hotmail.com    
24/04/2003


1. Create a custom service, e.g. MyCustomService, and a function called PerformGetData.

2. Place a memtable (kbmMemTable) and a kbmBinaryStreamFormat on the custom service created in step one. Make sure that the kbmBinaryStreamFormat is set to load/save table definitions.

3. Fill up the memtable, e.g. as follows:

       On the server I have:

       function TMyCustomService.PerformGetData(ClientIdent:TkbmMWClientIdentity;
         const Args:array of Variant):Variant;
       begin
        // Enter code here to perform function GetData

        kbmMemTable1.FieldDefs.Add('Name',ftString,30,false);
        kbmMemTable1.FieldDefs.Add('Seq',ftInteger,0,false);
        kbmMemTable1.CreateTable;
        kbmMemTable1.Open;

        kbmMemTable1.InsertRecord(['xyz', 1]);
        kbmMemTable1.InsertRecord(['abc', 2]);
        kbmMemTable1.AllDataFormat := kbmBinaryStreamFormat1;
        Result := kbmMemTable1.AllData;
       end;



4. Now you have sent the memtable as a variant to the client.
The steps at the client are as follows:
       Place a memtable, e.g. mt
       Place a kbmBinaryStreamFormat, e.g. Binary

      var
       v : Variant;
      begin
       v := dmConnection.Client1.SendRequest('MyCustomService','1.00.00','PERFORMGETDATA', []);
       mt.AllDataFormat := Binary;
       mt.AllData := v;
      end;



Just make sure that the binary stream format is set to load/save the table definition. The above statements are coded by hand; just assigning the Default Format and Form Format of the memory table within the Delphi Object Inspector to Binary will not work.

[Return to Top]


Sending / receiving messages and streams
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

Does kbmMW have the capability to send / receive messages and streams?

Yes, you can send streams, values, datasets, exceptions and more. Any request or response holds a set of values, some status information (exception info), and a stream.

[Return to Top]


Send multiple memtables to a client
YG Lim lim_yg@hotmail.com    
24/04/2003

How to send multiple memtables from the server to a client within a procedure

You can pass multiple kbmMTs into a service using
Args:=VarArrayOf([kbmMT1.AllData, kbmMT2.AllData,...])


Likewise to return multiple tables from the service use
Result:=VarArrayOf([kbmMTa.AllData, kbmMTb.AllData,...])


On the receiving end of this you use (note that VarArrayOf creates a zero-based variant array)
kbmMTx.AllData:=Result[0]
kbmMTy.AllData:=Result[1]


Each instance of kbmMT must have a matching stream format component assigned to its AllDataFormat property.

[Return to Top]


Sockets, Indy versus Delphi
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

Why didn't you go with Delphi's socket components rather than Indy?

I did indeed start out using Delphi's original socket components. The problem is that they are rather buggy and ... difficult ... to use in a multithreaded setup. There is a reason why Indy was chosen by Borland to be the next standard networking library for their dev tools. Thus, since Indy was going to be the future standard, I didn't want to put my money on a dead horse :)

But kbmMW is designed to be extendable, thus other methods of transportation (f.ex. the std. Delphi socket components or DXSock) could be implemented by inheriting from TkbmMWCustomTransport. More transport methods than TCPIP Indy will most likely show up in the kbmMW package, f.ex. mail (might be based on Indy), SMS (based on some SMS support component) or others. This will happen when there is a need for it.

[Return to Top]


SQL commands
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

I need a generic client dataset that can do the following:
- Pass the SQL command from client to the server.
- Execute the SQL command on the server.
- Return the dataset back.


Start with the demo server app, just for simplicity.

On Unit2 (the TkbmMWQueryService datamodule), change AllowClientStatement and AllowClientKeyFields to true. This will allow the client application to send the query statement instead of embedding it in the server.

In the client, set the statement in the Query property (it's named Query because the client is intended to be compatible also with non SQL queries). Make sure to set the KeyFields property to the name of the field(s) (multiple fields are separated with a semicolon ';') on the client if you will need to resolve changes in the client dataset back to the backend database.

Then simply open the client dataset as a normal TDataset, and the query will be sent via the server to the backend database.

I usually try to .... recommend... people not to store the SQL in the client, simply because it's a lot easier to change stuff in the server than to have to redistribute the client when a SQL change is needed. One of the things I disliked about all other middleware libraries was the requirement of embedded SQL in the client.

[Return to Top]


SQL commands, are they needed
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

Does kbmMW need SQL's Insert, Update and Delete functions? DBISAM supports them all, but FF does not. Which one, from your point of view, is better to use with kbmMW?


Nope... kbmMW doesn't explicitly need SQL.

The database adapter determines how to access backend databases, whatever their type. Even the file system could be a backend database. All that's needed is to write a database adapter containing a specialized TkbmMWCustomConnectionPool, TkbmMWCustomConnection and TkbmMWCustomPooledQuery/TkbmMWCustomPooledStoredProc/TkbmMWCustomPooledDataset for obtaining data from the backend database, and for resolving purposes a specialized TkbmMWCustomResolver/TkbmMWCustomSQLResolver. The TkbmMWCustomSQLResolver contains support for generating SQL automatically; the TkbmMWCustomResolver does not.

It's correct that FF2 doesn't support INSERT/DELETE/UPDATE SQL statements. Instead the resolving is done using a TffTable and SetKey/GotoKey operations to locate the record to be modified or deleted.

kbmMW is very flexible in this way.

I do not have enough experience with either DBISAM or FF2 to choose between them. Some people have reported FF2 as being faster; some prefer the fuller SQL syntax available in DBISAM.

From kbmMW's viewpoint, both are completely fine. There is no special overhead for any of them in the kbmMW code.

I'm not sure what you are asking about regarding the pooling mechanism.... but I'll try to answer anyway :)

Each TkbmMWxxxConnectionPool contains its own pool and cache mechanisms. Access to the pool is managed by the connection pool component and is thread safe. It locks critical areas for only a short time, and thus can handle many simultaneous threads accessing it without being a bottleneck.

Which connection from the connection pool will be returned is decided by several criteria, incl. the length of the queue for each connection and the duration of the currently running request.

After getting a connection from the pool, the connection can be used by the service thread. Thus multiple service threads can use connections concurrently. Hope this answers your question.

[Return to Top]


Stored procedure and trigger methodology
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

Can you explain how a developer can build "trigger" and "stored procedure" methodology on the server side?

These are two different beasts. Let's start with stored procedures first.

There are two kinds of stored procedures... one is a 'real' stored procedure on a backend database. This can be accessed by using the TkbmMWxxxPooledStoredProc components matching the backend database on the server side. Direct client access is for the most part implemented, but not yet enabled. This will happen in a later release.

The other kind is making a business object on the server side... in other words a service. This is done by creating a new datamodule _inherited_ (not copied) from TkbmMWCustomService or TkbmMWQueryService. Choose File/New..., select the kbmMW Service objects tab and the type to inherit from. Then make sure to select _INHERIT_, not copy or use. Then press OK.

Inheriting from a TkbmMWCustomService is the way to go if you need to work with simple procedures, such as calling a server method and returning a value... similar to normal function calls. Inherit from TkbmMWQueryService if you need to publish datasets and handle automatic resolving of data directly from/to a backend database.

In both cases the entry point into a service, seen from a service developer's point of view, is the protected method ProcessRequest. You need to override this and, by checking the arguments sent with it, do your business logic. Further, you need to override a few informational class methods.

This is a code snippet from the TkbmMWInventoryService showing an example of how it's done:

interface

...

  TkbmMWInventoryService = class(TkbmMWCustomService)
  private
    { Private declarations }

  protected
    { Protected declarations }
    function ProcessRequest(const Func:string; const ClientIdent:TkbmMWClientIdentity;
      const Args:array of Variant):Variant; override;

  public
    { Public declarations }
    class function GetPrefServiceName:string; override;
    class function GetAuthor:string; override;
    class function GetSyntaxAbstract:string; override;
    class function GetSyntaxDetails:string; override;
    class function GetVersion:string; override;
  end;

implementation
{$R *.DFM}

class function TkbmMWInventoryService.GetPrefServiceName:string;
begin
     Result:='KBMMW_INVENTORY';
end;

class function TkbmMWInventoryService.GetAuthor:string;
begin
     Result:='Kim Bo Madsen (kbm@components4developers.com)';
end;

class function TkbmMWInventoryService.GetSyntaxAbstract:string;
begin
     Result:='KBMMW_INVENTORY - Return service inventory information';
end;

class function TkbmMWInventoryService.GetSyntaxDetails:string;
begin
     Result:='"LIST                           - Return list of services available",'+
              '"GET VERSION <servname>         - Return version of service",'+
              '"GET AUTHOR <servname>          - Return author of service",'+
              '"GET ASSISTANCE <servname>      - Return assistance info of service",'+
              '"GET SYNTAX ABSTRACT <servname> - Return general syntax for service",'+
              '"GET SYNTAX DETAILS <servname>  - Return syntax details for service"';
end;

class function TkbmMWInventoryService.GetVersion:string;
begin
     Result:='kbmMW_1.0';
end;

function TkbmMWInventoryService.ProcessRequest(const Func:string; const
ClientIdent:TkbmMWClientIdentity; const Args:array of Variant):Variant;
var
     fname:string;
begin
     // Check function.
     fname:=UpperCase(Func);
     if (fname='') or (fname='LIST') then
     begin
          // ... list services available.
          Result:='This is the result.';
          exit;
     end;

     // Look at function.
     if fname='GET AUTHOR' then
        Result:='Some author.'
     else if fname='GET VERSION' then
        Result:='Some version.'
     else if fname='GET SYNTAX DETAILS' then
        Result:='Some syntax details.'
     else if (fname='GET SYNTAX ABSTRACT')
          or (fname='GET SYNTAX') then
        Result:='Some syntax abstract.'
     else if fname='GET ASSISTANCE' then
        Result:='Some assistance :).'
     else
         kbmMWRaiseUnknownFunc(fname);
end;



Services must be registered with the TkbmMWServer by using either:
kbmMWServer1.RegisterService(TkbmMWInventoryService,true,true,-1);

or, if you don't want to register it under the preferred service name:
kbmMWServer1.RegisterServiceByName('MYINVENTORY',TkbmMWInventoryService,true,true,-1);

procedure RegisterService(AServiceClass:TkbmMWCustomServiceClass;
  Enabled:boolean; Stats:boolean; MaxCount:integer);
procedure RegisterServiceByName(AServiceName:string;
  AServiceClass:TkbmMWCustomServiceClass; Enabled:boolean; Stats:boolean;
  MaxCount:integer);



Enabled determines whether the service is actually available to clients. Stats (new for 0.91a) determines whether statistics should be collected for this specific type of service.

MaxCount determines the maximum number of instances of this specific service that are allowed to run.

Triggers are something else: the server wants to push information to the client, i.e. an inverted client/server relationship. And since it's essentially an inverted client/server relationship, you can make the client behave as a server.
All that's needed is to put a TkbmMWServer on the client along with a server transport component (e.g. TkbmMWTCPIPIndyServerTransport), set its properties to a port number the other end should use for the contact, and then register a service with that kbmMWServer. The service is created in exactly the same way as described a moment ago.

In the ClientIdent object handed to you in ProcessRequest, there is a RemoteLocation property which specifies the remote address of the client making the call. You can use that to know which client to call back to, and then connect to it and call its custom trigger service.

[Return to Top]


Third party vendor additions
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

How do you see third party vendors developing additions to kbmMW? Can you elaborate on this so anyone interested can contact you?

There are many areas where kbmMW can be extended... indeed it's designed from scratch to be very extendable. Thus I welcome other people who would like to create new transport components, database backend components, transport stream formats, dataset stream formats, special encryption components, special database bound authorization components, plastic wrapped services and client components and more.

I can see many interesting kbmMW extension packages being developed. F.ex. a complete workflow package with predefined services, client components and possibly stream formats which match workflow, or a complete accounting package with the services and client components etc. needed for that, or.... There are loads of interesting areas where kbmMW can be extended for special purposes.

The license agreement states that if existing components are improved or changed in any way, those changes must be made available to Components4Developers. The idea is that by providing the kbmMW framework very cheaply to developers, the developers must in return let their fixes and enhancements be available to other kbmMW customers. This will give a better product, and a faster turnaround time for development and fixes.

On the other hand, if you f.ex. develop new services and/or client components, these are not covered by that clause. Thus there is a huge potential to sell those types of plastic wrapped extensions. Buyers will of course need to purchase a kbmMW license per developer as normal. A bundling price can be negotiated. Please contact kbm@components4developers.com if you have an idea and want to know more.

[Return to Top]


Unsupported datasets
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

I am using the DiamondAccess components to read/write directly (no BDE, thank god) to an MDB database. These also work with good speed together with CGIExpert. Let's take the example of the Contacts table in M$-Outlook. The user can change records locally (Outlook application) or remotely (CGI webpage). I would like to be able to synchronize all changes between the local and the remote databases at defined intervals. The remote DB is currently an Access database, but could become Interbase or SQL Server.

Can I accomplish these tasks with your components? If so, then (with a little help) I could provide you with another example for your component suite. I could also use the image example, if you wish.

Why do I need TkbmMemTbl? Can I use any other query & grid components ?


Actually the BDE really works quite well.... _if_ it is wrapped in a layer that handles its shortcomings.... kbmMW does that :)

But back to your question. Yes, you can do that with kbmMW by simply calling the Resolve method of the client dataset whenever changes should be written back to the server.

What is needed in your case is a DiamondAccess connector for kbmMW. That is usually pretty easy to make. kbmMW connectors for DBISAM 3.xx and Interbase via IBX v. 5.xx were released yesterday. FlashFiler is next; the ones after that will be scheduled by demand: ADO, Oracle, DiamondAccess, MSSQL, Advantage and many others.

If you want to see how a database connector is made, you can check kbmMWBDE.pas, kbmMWDBISAM3.pas and kbmMWIBX5.pas. This way you will be able to connect to any backend database without having to change anything in your clients, and only a little on the server.

The memtable is the basis of dataset manipulation in kbmMW. kbmMW works, in general, statelessly, which means kbmMW doesn't keep a cursor open towards a backend database. Instead it loads the data from the database, and emulates a cursor towards the client. You will usually not see the kbmMemTable directly, but instead see (for clients) TkbmMWClientQuery, and for servers TkbmMWCustomPooledQuery, TkbmMWCustomPooledStoredProc and TkbmMWCustomConnectionPool descendants like TkbmMWBDEQuery, TkbmMWBDEStoredProc and TkbmMWBDEConnectionPool.

You will have all the other nice benefits a kbmMemTable gives too, like filter expressions (on all resultsets! regardless of database backend), fast speed, low memory consumption, easy interaction with other tables and much more.

If you don't like using kbmMemTable as the basis for dataset transport, you can choose not to use the kbmMW dataset/database support and just write your own business objects specialized for your purposes.

[Return to Top]


Updating only changed fields
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

Is there any way to update only the fields you changed during a resolve? It seems that the whole record is being transmitted, including blob fields.

It's correct that the complete record is transmitted. The reason is that the server might need fields which aren't changed to be able to update the record on the backend. I see the issue with large fields getting transmitted, though. I'll look into it.

[Return to Top]


Versioning does not work
Ben Hayat Micnet@ix.netcom.com    
03/08/2001

I have created a descendant of kbmCustomDeltaHandler that overrides the InsertRecord, DeleteRecord, ModifyRecord and UnmodifiedRecord methods. When I call the resolve method I have the following problems:

1. Deleted records appear to be physically missing from the dataset, and as a result the DeleteRecord method is never called.

2. All records are marked as Unmodified in spite of the fact that I had just inserted, deleted or modified a record; as a result the UnmodifiedRecord method is called for all records.

I have tried to call the Resolve method before as well as after I call the Post method, I have also tried the BeforePost and AfterPost events, and I get the same result.
What am I missing here?


Make sure that you have enabled versioning on the memtable. Also do not call the CheckPoint method before your resolve.
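
A minimal sketch of the answer (kbmMemTable1 is a hypothetical memtable instance with your delta handler attached, assuming it exposes the Resolve call the question refers to):

   kbmMemTable1.EnableVersioning := true;   // must be on before edits, otherwise no deltas are tracked
   // ... insert / modify / delete records ...
   // Do NOT call CheckPoint here; resolve first.
   kbmMemTable1.Resolve;                    // invokes the attached delta handler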

[Return to Top]



The kbmMW FAQ is created and maintained by the kbmMW Documentation Effort Group.
For more information on how to join our efforts, send email to:
[kbmMW FAQ]



This page was created by Help-FAQ Builder