4. Advanced Features

4.1. Stored Procedures Support

As most PostgreSQL users know, it does not have stored procedure support. Instead, it has a more flexible mechanism called Functions (created by CREATE FUNCTION statements).

The pgExpress Driver allows functions to be used as stored procedures.

A usage example:

Example 2.4. Stored Procedures usage

with SQLDataset1 do
begin
  Close;
  CommandType := ctQuery;
  CommandText := 'create or replace function Test1(int2, int2) returns'
    + ' int2 as ''select $1 + $2;'' language ''SQL'';';
  ExecSQL;
  CommandType := ctStoredProc;
  // CommandText MUST be set before the parameters:
  // the VCL clears Params when CommandText is assigned.
  CommandText := 'Test1';
  // Params were automatically filled from the function signature
  Params[0].AsSmallInt := 10;
  Params[1].AsSmallInt := 20;
  ExecSQL;
  ShowMessage(Params[2].AsString); // Will display 30
end;

4.2. Fetch Cursors

PostgreSQL supports a feature named "Fetch Cursors". These special cursors allow records to be retrieved from the server in smaller sets than the default behavior, which is retrieving the full recordset. To achieve this, the PostgreSQL DECLARE and FETCH commands are used. This can accelerate query response in some situations, especially when the connection to the server is fast and the recordset is large. Also, Fetch Cursors do not need the whole query to be processed: they can start retrieving data as soon as the requested number of rows is available. Finally, Fetch Cursors consume less memory than regular cursors, because fewer rows stay in memory at any given time.

As of version 1.60, the pgExpress Driver features automatic FETCH cursors. The only thing needed is setting the special BlockRead parameter; however, you will probably also want to change the RowsetSize parameter (please check your Kylix/Delphi documentation): as of Delphi 6/7 and Kylix 1/2/3, the default VCL/CLX value is 20.

The number of rows retrieved on each internal fetch operation is given by the RowsetSize parameter. A negative RowsetSize will issue a FETCH ALL command, which actually does not differ much from a regular query without a FETCH command: all rows will be retrieved at once.

Usually larger values, like 50 or 100 (or even more, depending on your environment), will give better performance. Please experiment to see which value is best for your application, but keep in mind that each FETCH command means a separate query, and that could slow down dataset scrolling over slower connections.

A typical dbxconnections(.ini) setup would be:

[PGEConnection]
BlobSize=32
HostName=myhost
Database=mydb,BlockRead=True
DriverName=PostgreSQL
Password=temp123
User_Name=foo
RowsetSize=200
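
The same setup can be done at runtime. A minimal sketch, assuming a TSQLConnection component named SQLConnection1 already configured for the pgExpress Driver (the parameter names mirror the dbxconnections(.ini) entries above):

SQLConnection1.Close;
SQLConnection1.Params.Values['RowsetSize'] := '200';
// Special parameters such as BlockRead are appended to the Database value:
SQLConnection1.Params.Values['Database'] := 'mydb,BlockRead=True';
SQLConnection1.Open;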

Internally, the pgExpress Driver will automatically execute all SQL commands needed to use Fetch Cursors (DECLARE and FETCH); all operations are transparent to the user.
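
For reference, the statements issued internally look roughly like the sketch below; the cursor name pgx_cursor and the table mytable are hypothetical (actual names are generated by the driver), and 200 stands for your RowsetSize value:

begin;
declare pgx_cursor cursor for select * from mytable;
fetch 200 from pgx_cursor;
-- ...further FETCH commands as the dataset scrolls...
close pgx_cursor;
commit;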

Notes:

As of PostgreSQL 7.2, a transaction is needed in order to use Fetch Cursors; the pgExpress Driver will start a transaction automatically.

If you don't set TClientDataset.FetchOnDemand to True, Fetch Cursors will be worthless, because the TClientDataset will cache all rows in memory.

The RetainCursor Special Parameter will be ignored if Fetch Cursors are in use. For more details on the inner workings of Fetch Cursors, please refer to the PostgreSQL documentation for the DECLARE and FETCH commands.

Note

The Fetch Cursors feature is available only for registered users.

4.3. Authentication

As of PostgreSQL 7.3, three authentication methods are available: password, crypt and md5.

Since the pgExpress Driver is libpq based, use of these three methods is automatic and will follow what you have defined in the pg_hba.conf file (please check the PostgreSQL documentation for details).

We advise using the md5 method (PostgreSQL 7.2 and above), unless you are using SSL or another encryption wrapper for your connection to the server.
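
For illustration only, a pg_hba.conf entry selecting md5 authentication for a hypothetical database mydb and user foo could look like the line below; the exact column layout depends on your server version, so please follow the comments shipped in your own pg_hba.conf:

host  mydb  foo  192.168.0.0  255.255.255.0  md5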

4.4. Automatic numeric format detection

The pgExpress Driver has the ability to automatically detect the numeric format used on your server, namely the decimal and thousands separators. There should be no need to set numeric formats manually, but in case you want or need to, try the ServerDecimalSeparator special parameter.
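
For instance, assuming the special-parameter syntax shown in the Fetch Cursors section, forcing a period as the decimal separator would look like this in dbxconnections(.ini) (the value is illustrative):

Database=mydb,ServerDecimalSeparator=.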

In case the automatic detection does not work for you, please mail us at support@vitavoom.com

4.5. Native Int8 support

As of pgExpress Driver v2.0, Int8 fields can be mapped directly to TLargeintField fields using a hack.

Since dbExpress™ does not support Int8 natively, the 1.X series of the pgExpress Driver used to map them as TBcdField fields by default (AsBcd). That behavior is now DEPRECATED.

If you want or need to keep compatibility with pgExpress Driver 1.X, use AsBcd. However, users are encouraged to upgrade their field definitions to use native Int8 support. Doing this is easy: just delete the field definition from the Dataset's field list and add it again.
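
A minimal sketch of reading a native Int8 value, assuming a hypothetical table mytable with an int8 column named big_id:

var
  Big: Int64;
begin
  with SQLDataset1 do
  begin
    CommandText := 'select big_id from mytable';
    Open;
    // With native Int8 support the field comes up as a TLargeintField:
    Big := (FieldByName('big_id') as TLargeintField).Value;
    ShowMessage(IntToStr(Big));
    Close;
  end;
end;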

4.6. Retaining Cursors

The lack of a First() method on the dbExpress™ ISQLCursor interface makes it necessary to run a query twice if you want to access a previous record again; the dbExpress™ technology is currently forward-only. Of course, Client Datasets (TCustomCachedDataset descendants) will cache records so random access is possible; but non-cached SQL Datasets (TCustomSQLDataset descendants) will make the query run again.

The pgExpress Driver implements an experimental setting, RetainCursor, that makes non-cached datasets MUCH faster to access from the second time on, since the records are retrieved from the internal libpq cache instead of being fetched again from the server. Basically, if RetainCursor is True, the query is a SELECT query, and the query is the same as the last one executed, the results displayed will come from the cache.

This behavior is controlled by the RetainCursor Special Parameter. If you are using RetainCursor = True and want a particular query to be truly executed again, instead of retrieved from the cache, add a '!' char to the beginning of the query (it will be automatically stripped by the pgExpress Driver):

!select * from pg_type;
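
A minimal sketch of the resulting behavior, assuming RetainCursor was enabled as a special parameter (e.g. Database=mydb,RetainCursor=True):

SQLDataset1.CommandText := 'select * from pg_type';
SQLDataset1.Open;  // executed on the server
SQLDataset1.Close;
SQLDataset1.Open;  // same query: served from the internal libpq cache
SQLDataset1.Close;
SQLDataset1.CommandText := '!select * from pg_type';
SQLDataset1.Open;  // the '!' forces a true re-execution and is stripped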

Note

This setting will be ignored if Fetch Cursors are being used.

4.7. Large Objects (BLOBs)

Large Objects are primarily read-only on the pgExpress Driver, but starting with v2.0, we have added partial writing support.

4.7.1. Large Object field declaration

Large Object fields must be declared using the method introduced by the PostgreSQL ODBC driver: a 'lo' type. This type can be created easily using the following query (PostgreSQL 7.2.X and below):

create type lo(
  internallength=4,
  externallength=10,
  input=oidin,
  output=oidout,
  default='',
  passedbyvalue
);

...or (PostgreSQL 7.3 and above):

create domain lo as oid;

You can then use this type normally on your tables:

Example 2.5. Creating a table with a Large Object

create table employee(id integer, name varchar(30), picture lo);

Note

Some of this code comes from the PostgreSQL ODBC FAQ.

4.7.2. Using Large Objects (BLOBs)

The pgExpress Driver can read Large Object (BLOB) fields without problems; however, due to the way the dbExpress™ technology was designed and the particular implementation of Large Objects on PostgreSQL, the pgExpress Driver has problems updating Large Object fields.

As of version 7.3.2, PostgreSQL refers to Large Objects using an OID that points to the real data. The libpq library needs this OID to do all sorts of operations on Large Object fields. The problem is that the dbExpress™ API expects only the BLOB field's data; there is no way to store the OID information. After the BLOB field is processed internally by the VCL/CLX and is sent back to the pgExpress Driver to be stored in the database, the original OID of the Large Object can't be retrieved, so pgExpress won't know which BLOB it refers to. This means that we can't alter the original Large Object, and if we create a new Large Object, the original one ends up as an orphan Large Object (a LO that exists but is not referenced by any row, wasting disk space), unless of course another row refers to the same LO, which is not a common situation.

At Vita Voom Software™, we understand that Large Object support is crucial for some users, and thus we have the following suggested workaround for this problem, which also affects the JDBC and ODBC drivers:

  1. Create your table normally, including the BLOB field (please read this section about BLOB field declaration):

    create table lo_test(a serial, b lo);
    
  2. Create a trigger and function that will delete the Large Object if its value is changed (replace the b field name with your real field name):

    create function lo_clean() returns trigger as 
    'begin
      if (TG_OP = ''UPDATE'') then
        if (old.b = new.b) or (old.b is null) then
          return new;
        end if;
      end if;
      -- unlink the old Large Object, but only if it still exists
      if exists (select 1 from pg_largeobject where loid = old.b) then
        perform lo_unlink(old.b);
      end if;
      return new;
    end'
    language 'plpgsql';
    
    create trigger lo_cleanup
    after delete or update on lo_test -- must be AFTER to avoid deleting the LO if the record is not deleted
    for each row execute procedure lo_clean();
    
    This trigger prevents orphan Large Objects from being left in the table. Note that triggers are fast; in fact, referential integrity is implemented internally in PostgreSQL using triggers. If you prefer different code to do this, feel free; all you need to do is ensure no orphan OIDs are left in the pg_largeobject table.

  3. Activate the BlobWriting special setting, and the driver will write BLOBs to the database (see the sketch after the notes below).

Notes:

  • This method has the disadvantage of wasting OIDs. This should never be a problem, since more than 4 billion (2^32) OIDs are available. However, if you want to save OIDs, the lo_clean() function above could easily be modified to reuse the old OID value in the case of an UPDATE query, by adding a statement such as:
    update pg_largeobject set loid=old.oid where loid=new.oid
    
  • As a final note, the contrib/lo directory in the PostgreSQL distributions contains code that helps avoid orphan Large Objects; you might be interested in using it, or even reading its docs for more background on the subject.
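
A minimal client-side sketch of step 3, assuming BlobWriting was enabled (e.g. Database=mydb,BlobWriting=True) and a TClientDataset/TDataSetProvider pair wired to a TSQLDataset that selects from the lo_test table above:

ClientDataSet1.Open;
ClientDataSet1.Append;
ClientDataSet1.FieldByName('a').AsInteger := 1; // serial set by hand for simplicity
// The lo column surfaces as a regular BLOB field; fill it from a file:
(ClientDataSet1.FieldByName('b') as TBlobField).LoadFromFile('picture.jpg');
ClientDataSet1.Post;
ClientDataSet1.ApplyUpdates(0); // the driver creates the Large Object here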

Hopefully the PostgreSQL developers will introduce a new API for handling Large Objects that does not suffer from these limitations.

Tips:

If you have orphan Large Objects in a table and want to clean them, try the following approach:

select lo_unlink(lo_column) from mytable;

For a demo using BLOB fields, try Demos/Fishfact from the pgExpress Driver distribution.