Using the OMDB library

This document covers the following topics:

   - Introduction to OMDB
   - Accessing an ODBC-compliant database from an OmniMark program
   - Accessing an Oracle database via OCI from an OmniMark program
   - Creating a database connection
   - Executing SQL statements against your database
   - Working with the result set from a query
   - Compiling SQL statements
   - Inserting large data into your database
   - Manipulating individual database tables without using SQL statements
   - Using stored procedures

Introduction to OMDB

The OMDB library provides a way to interface with relational databases from your OmniMark programs. This library includes functions to:

   - open and close database connections
   - execute SQL statements and queries against a database
   - compile SQL statements for repeated execution
   - stream large data values directly into a database
   - insert, update, and delete table rows without writing SQL
   - execute stored procedures

The current library supports interactions with both ODBC-compliant databases and Oracle 8 (and higher) databases via OCI.

To use any of the OMDB functions you must include the following line at the beginning of your OmniMark program:

    include "omdb.xin"

Accessing an ODBC-compliant database from an OmniMark program

Before you can do anything with your database from an OmniMark program, there must be an open connection to the database. A connection is made using the dbOpenODBC function. To make a connection you must know:

   - the data source name of the database
   - the user name (if one is required)
   - the password (if one is required)

The dbOpenODBC function returns a connection to the specified database as a dbDatabase OMX component.
     process
     local dbDatabase Targetdb

     set Targetdb to dbOpenODBC "Phred" user "Jeremy" password "2002the"
The database connection OMX component is typically assigned to a variable, as in the code sample above.

Accessing an Oracle database via OCI from an OmniMark program

OMDB also supports accessing Oracle 8 (and higher) databases via OCI instead of ODBC. Which function you use depends on which Oracle client you have installed. If you have the Oracle 8 client libraries installed on your machine, then you use the dbOpenOCI8 function to connect to the database. If you have the Oracle 8i client libraries installed on your machine, and you are connecting to an Oracle 8i (or higher) database, then you use the dbOpenOCI8i function.

     process
     local dbDatabase Targetdb

     set Targetdb to dbOpenOCI8i "Phred" user "Jeremy" password "2002the"
The database connection OMX component is typically assigned to a variable, as in the code sample above.

Creating a database connection

An open database connection operates in one of two transaction modes:

   - auto-commit (the default), in which every operation becomes a permanent change as soon as it is performed
   - manual commit, in which operations are not permanent until they are explicitly committed

In the example below we open two different ODBC connections to the database "MarketStats".
     process
     local dbDatabase Targetdb
     local dbDatabase MyOwnDB

     set Targetdb to dbOpenODBC "MarketStats"
     set MyOwnDB to dbOpenODBC "MarketStats"

     dbSetTransactionType MyOwnDB to DB_MANUAL_COMMIT
Although MyOwnDB and Targetdb are both connections to the same database, each connection behaves independently. The connection Targetdb has the default transaction mode of auto-commit. In this mode, every insert, delete, or update operation is a permanent change to the database.

The connection MyOwnDB has a transaction mode of manual commit. In manual-commit mode insert, delete, and update operations are not permanent, and cannot be seen by the Targetdb connection, until they are committed using dbCommit. The dbRollback function is used to cancel all uncommitted operations associated with a specific connection.
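
For example, the sketch below (which uses a hypothetical table named StaleQuotes) shows the difference in practice: an operation performed on the manual-commit connection MyOwnDB remains invisible to Targetdb until dbCommit is called, or it can be cancelled entirely with dbRollback.

     ; delete rows on the manual-commit connection; Targetdb cannot see
     ; this change yet
     dbExecute MyOwnDB sql "delete from StaleQuotes where Age > 30"

     ; make the deletion permanent ...
     dbCommit MyOwnDB

     ; ... or, instead, cancel every uncommitted operation on this connection:
     ; dbRollback MyOwnDB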

You can also open connections to the same database using different access methods. For example, you can have both ODBC and OCI connections open at the same time:

     process
     local dbDatabase Targetdb
     local dbDatabase MyOwnDB

     set Targetdb to dbOpenODBC "MarketStats"
     set MyOwnDB to dbOpenOCI8i "MarketStats"

A database connection is closed when:

   - the dbDatabase OMX component goes out of scope, or
   - the connection is explicitly closed using dbClose.
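
If you want to release a connection before its dbDatabase variable goes out of scope, you can close it explicitly. A minimal sketch:

     ; release the connection as soon as it is no longer needed; otherwise it
     ; is closed automatically when Targetdb goes out of scope
     dbClose Targetdb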

Executing SQL statements against your database

The dbExecute function can be used with any valid non-selecting SQL statement (that is, anything except a query). For example:

     local stream Add-Table initial
        { "create table AvailableDownloads "
       || "(ItemID integer not null, ItemType char(6), "
       || "AccessLevel char(1), "
       || "constraint item_id primary key (ItemID));" }

     dbExecute Targetdb sql Add-Table
What is considered valid SQL will depend on the SQL server for your database.

The dbQuery function allows you to execute any SQL query statement against an open database. The result of the query is returned in a shelf of type dbField. The resulting value for a field is placed in a separate item on the shelf and given a unique key. The key corresponds to the name of the field in the table the queried data was obtained from.

     process
     local dbDatabase XLdb
     local dbField NewUsers variable

     set XLdb to dbOpenODBC "Phase2"

     dbQuery XLdb
        sql ( "select * from OldList " )  record NewUsers
In the example above, the table OldList contains three columns: Name, UserID, and PhoneNum. Each row of the result set from this query will contain three items: NewUsers {"Name"}, NewUsers {"UserID"}, and NewUsers {"PhoneNum"}.

In this case, each data item in the result set came from a column or field with a distinct name. However, this is not always the case. If the query involves multiple tables, it is possible that data items in the result set originate from two columns with identical names. The example below queries two fields named "Name": one in the Course table, and one in the Student table. The unique key names are created from the column names by adding a suffix to each subsequent occurrence of a duplicate field name.
     dbQuery XLdb
        sql ( "select C.Name, S.Name, SC.Grade "
           || "from Student S, Course C, StudentCourse sc "
           || "where SC.CID = C.CID and S.SID = SC.SID "\
        record NewUsers
Each row of the result set for this query will contain three items: NewUsers {"Name"}, NewUsers {"Name*2"}, and NewUsers {"Grade"}, where NewUsers {"Name"} represents the course name and NewUsers {"Name*2"} represents the student name.

Working with the result set from a query

The results from a query are returned in a shelf of dbField items. Individual result items are accessed by using a unique key. In the example:

        set CourseName to dbFieldValue NewUsers {"Name"}
        set StudentName to dbFieldValue NewUsers {"Name*2"}
        set Grade to dbFieldValue NewUsers {"Grade"}
previously defined variables are assigned the values from the current row of the result set. The value of an item in the result shelf is accessed using the function dbFieldValue.

But what if the result set from the query is empty (no data met the selection criteria of the SQL query)? The first call to dbFieldValue would result in an exception being thrown. The way to avoid this is to test whether there is a record in the result set before attempting to read it.

This test is performed using the function dbRecordExists. dbRecordExists returns TRUE when there is a row in the result set to be read, and FALSE when there is not. For example:

        do when dbRecordExists NewUsers
           set CourseName to dbFieldValue NewUsers {"Name"}
           set StudentName to dbFieldValue NewUsers {"Name*2"}
           set Grade to dbFieldValue NewUsers {"Grade"}
        done

When a query results in multiple rows, we need to be able to move to the various rows in the result set. This is done using the function dbRecordMove.

     repeat
        exit unless dbRecordExists NewUsers
        set CourseName to dbFieldValue NewUsers {"Name"}
        set StudentName to dbFieldValue NewUsers {"Name*2"}
        set Grade to dbFieldValue NewUsers {"Grade"}
        	. . .
        dbRecordMove NewUsers
     again
The code in this example tests for a record, processes the record if the test succeeds, and then moves to the next record. As the loop repeats, the test for a record is performed again. If the move to the next row did not move beyond the boundary of the result set, the test succeeds again and processing can continue.

The function dbRecordMove allows you to make several types of movement through a result set. Which of these movements are available to you depends on the capabilities of the database driver involved. Many ODBC drivers, as well as the OCI drivers, support only a move to the next row. Other ODBC drivers allow more complex movements through the result set. For example,

        dbRecordMove NewUsers to DB_FIRST
repositions the data cursor to the first row of the result set NewUsers. The code
        dbRecordMove NewUsers by 2
skips the next row, and positions the data cursor on the row following the skipped row. A complete list of the moves supported by the dbRecordMove function (though not necessarily by the database driver you are using) is detailed in the reference for the function. If you attempt a movement of the data cursor that is not supported by the database driver you are using, the external exception OMDB501 will be thrown.
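
If your program cannot know in advance which movements the driver supports, you can guard the call with OmniMark's normal exception handling. The sketch below assumes the standard catch #external-exception form, which lets you examine the identity (here, OMDB501) and message of the exception:

     do
        dbRecordMove NewUsers to DB_FIRST
     catch #external-exception identity catch-id message catch-msg
        ; OMDB501 indicates a cursor movement the driver does not support
        output "Unsupported cursor move (" || catch-id || "): " || catch-msg || "%n"
     done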

Compiling SQL statements

A single SQL statement often needs to be executed many times in a single program. The OMDB library provides functions that allow you to pre-compile any valid SQL statement once, and then execute the pre-compiled statement as many times as needed. These functions offer an efficiency gain over dbExecute and dbQuery, which both compile and execute a statement each time they are called.

The dbStatementCompile function requires a SQL statement and a database connection as input. If the SQL statement requires input data, you can explicitly specify the data types of the input data in a shelf, or let OMDB determine the data types automatically.

The following example shows the code to compile a SQL statement for the ODBC interface and explicitly specify the data types of the input parameters.

     local dbStatement CompanyInsert
     local dbStatement CompanyQuery
     local integer inserttypes variable initial { DB_TEXT, DB_INTEGER }
     local integer querytypes variable initial { DB_TEXT }

     set CompanyInsert to dbStatementCompile Targetdb
        sql "insert into Customers (CompanyName,Country) Values (?,?);"
        types inserttypes

     set CompanyQuery to dbStatementCompile Targetdb
        sql ( "select CompanyID from Customers where CompanyName = ?"  )
        types querytypes
The '?' characters in the SQL statement are the placeholders for the input parameters when using the ODBC interface. The shelf inserttypes specifies that the value provided for the first '?' must be text and the value supplied for the second '?' must be an integer. The shelf querytypes contains the specification for a single input parameter of type text.

When using the OCI interface, placeholders are named parameters: each must have a unique name beginning with a colon. A common technique is to name the first parameter ":1", the second ":2", and so on.

The following example shows the code to compile a SQL statement for the OCI interface and explicitly specify the data types of the input parameters.

     local dbStatement CompanyInsert
     local dbStatement CompanyQuery
     local integer inserttypes variable initial { DB_TEXT, DB_INTEGER }
     local integer querytypes variable initial { DB_TEXT }

     set CompanyInsert to dbStatementCompile Targetdb
        sql "insert into Customers (CompanyName,Country) Values (:1,:2);"
        types inserttypes

     set CompanyQuery to dbStatementCompile Targetdb
        sql ( "select CompanyID from Customers where CompanyName = :1"  )
        types querytypes

The most straightforward way to execute your compiled SQL statement is with the function dbStatementExecute. Data for any input parameters are supplied in a shelf. The data must match the specification made at compile time. That is, there must be the same number of input values, presented in the same order and of the same type. If the compiled statement is a query, a shelf of dbField items must be supplied to receive the result set.

The following sample shows code that will execute the statements compiled in the example above:

     local dbField QueryResults variable
     local stream TableQueryShelf variable
     local stream TableInputShelf variable

     set new TableQueryShelf {"CompanyName"} to Company

     dbStatementExecute CompanyQuery
        with TableQueryShelf
        record QueryResults

     do unless dbRecordExists QueryResults
        clear TableInputShelf
        set new TableInputShelf {"CompanyName"} to Company
        set new TableInputShelf {"Country"}
           to dbFieldValue QueryResults-01 {"CountryCode"}
        dbStatementExecute CompanyInsert with TableInputShelf
     done
This code sample uses keys on the input shelf TableInputShelf. The keys are there only for readability and are not necessary. What matters to dbStatementExecute is the number of values in the input data shelf, their data types, and their order.
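
For example, the following sketch (the values are hypothetical) executes the compiled CompanyInsert statement with an unkeyed input shelf; the first item is matched to the first placeholder and the second item to the second:

     local stream PositionalInput variable

     ; unkeyed items are matched to the placeholders by position and converted
     ; to the types given at compile time (DB_TEXT, then DB_INTEGER)
     set new PositionalInput to "Acme Widgets"
     set new PositionalInput to "124"

     dbStatementExecute CompanyInsert with PositionalInput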

Inserting large data into your database

All other methods of inserting data into a database from OmniMark require the data to be contained in a buffer, which is not desirable for large amounts of data. The dbStatementStreamingExecute function allows data that is external to the OmniMark program to be streamed directly into the database.

Be aware that the order in which parameters are requested for insertion depends on the database driver, and so this function requires a different programming technique.

The following code sample demonstrates streaming data into a database using the OCI interface, with data coming both from a large binary file and from a TCP connection.

     local dbDatabase db
     local dbStatement stmt
     local integer param-types variable
     local dbField insert-handle
     local stream in-stream
     local tcp-connection tcpcon

     .  .  .
     ; db is assumed to be opened (for example, with dbOpenOCI8i) in the
     ; code elided above
     set new param-types{"1"} to DB_BLOB
     set new param-types{"2"} to DB_CLOB
     set stmt to dbStatementCompile db
        sql "insert into MyTable values (:1, :2)"
        types param-types

     dbStatementStreamingExecute stmt with insert-handle
     repeat
        exit when not dbFieldPrepare insert-handle

        open in-stream as dbFieldSink insert-handle
        using output as in-stream
        do select key of insert-handle
           case "1"
              output binary-mode file "largebinaryfile.bin"
           case "2"
              output tcp-source tcpcon protocol IOProtocolMultiPacket
        done
        close in-stream
     again

Manipulating individual database tables without using SQL statements

Executing SQL statements with dbExecute or dbStatementExecute is not the only way to make changes to a database. The OMDB library includes a set of functions for direct table manipulation. These functions (dbTableInsert, dbTableDelete, and dbTableUpdate) have several advantages over simply executing the comparable SQL statements:

The disadvantage of these functions over executing comparable SQL statements is that:

To use these functions you must have open connections to the database (made using one of the dbOpen functions) and to the target table (made using the function dbTableOpen).

The dbTableOpen function returns a dbTable OMX component representing a connection to a single table in an open database. Multiple connections can be opened to the same table, and each newly opened connection behaves independently of the others. You may close a connection to a table by allowing it to go out of scope, or by explicitly closing it using dbTableClose.
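
As a rough sketch only, the outline below opens a connection to a hypothetical Customers table, inserts one row, and closes the connection. The herald keywords used with dbTableOpen and dbTableInsert here ("table" and "with"), and the use of column names as shelf keys, are assumptions; consult the dbTableOpen and dbTableInsert references for the exact syntax.

     local dbTable CustomerTable
     local stream NewRow variable

     ; the herald keywords below are assumptions -- see the function references
     set CustomerTable to dbTableOpen Targetdb table "Customers"

     set new NewRow {"CompanyName"} to "Acme Widgets"
     set new NewRow {"Country"} to "124"
     dbTableInsert CustomerTable with NewRow

     dbTableClose CustomerTable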

Using stored procedures

A stored procedure is a set of SQL statements or statements in another language (such as Visual Basic) with an assigned name that is stored in a database in its compiled form. The OMDB library function dbProcedureOpen is used to open a connection to a procedure stored in a connected database and return an OMX component dbProcedure. The use of stored procedures can be helpful in controlling access to data (end-users may enter or change data but do not write procedures), preserving data integrity (information is entered in a consistent manner), and improving productivity (statements in a stored procedure only need to be written once).

Access these stored procedures from your OmniMark program using the function dbProcedureOpen. With most installations and platforms, you can use the simple form of this function, as in:

  process
     local dbProcedure test-proc
     local dbDatabase Targetdb

     set Targetdb to dbOpenODBC "Phred" user "Jeremy" password "2002the"
     set test-proc to dbProcedureOpen Targetdb procedure 'OMDB_PROC_1'    
where 'OMDB_PROC_1' is the name the procedure was stored under. This format does not require you to describe the parameters of the procedure. In some rare circumstances when using the ODBC interface, your ODBC driver, database, or platform will require that you provide specific descriptive details on the parameters used by the stored procedure at the time the connection to the procedure is made. For details on this form of the function, please consult the documentation for dbProcedureOpen.

Once a connection to the stored procedure has been made, you can execute it using the function dbProcedureExecute. This function supplies any input required and executes the procedure.

Shelves are used to handle the input and output parameters and the procedure results. For example:

       local stream in-val variable
       local dbField out-val variable
       local dbField res-val variable

       set new in-val to '7734'
       set new in-val to '34-RTS-485643'
       set new in-val to '47'

       dbProcedureExecute purchase-item
          in in-val
          out out-val
          result res-val
Notice that the single record of items in the input shelf in-val is not keyed. dbProcedureExecute works the same way that dbStatementExecute does: the data items in the shelf must be in a specific order.

The output shelf out-val is a keyed shelf containing a single record. After the execution of the stored procedure, this shelf is cleared and then populated with dbField items attached to the output and inout procedure parameters. The order of the output shelf items corresponds directly to the ordering of the procedure parameters. Item keys are assigned to the shelf items based on the names of the procedure parameters. For example:

       output dbFieldValue out-val{ 'ItemCount' }      ||
              ' units were sold%n'
       output dbFieldValue out-val{ 'InventoryCount' } ||
              ' units remaining in inventory%n'
       output 'Total value of the sale was $'          ||
              dbFieldValue out-val{'RETURN_VALUE'}     || '%n'
If the stored procedure is a function that returns a value, the value can be found at the beginning of the output shelf, keyed 'RETURN_VALUE'.

The results shelf supplied with the dbProcedureExecute function is attached to the result set of any SQL query executed in the stored procedure. As a stored procedure can contain more than one statement, it may also contain more than one query, and each query may yield multiple records in its result set. The function dbRecordSetMove allows you to move through any number of result sets arising from the execution of a stored procedure. For example:

     dbProcedureExecute proc
        result rset
     repeat
        output "Result Set %n"
        repeat
           exit unless dbRecordExists rset
           output "%n Record"
           repeat over rset
              output "%n%t" || key of rset || '= "' ||
                     dbFieldValue rset || '"'
           again
           output "%nEnd of Record %n"
           dbRecordMove rset
        again
        exit unless dbRecordSetMove rset
     again

The code in this sample reads through a result set in the results shelf until the end of the result set is passed. The call to dbRecordSetMove moves to the next record set in the results shelf if one exists. If dbRecordSetMove indicates that there is another record set in the results shelf, the process of reading through the entire result set is repeated.

      Related Syntax
   dbClose
   dbCommit
   dbDatabase
   dbExecute
   dbField
   dbFieldValue
   dbOpenOCI8
   dbOpenOCI8i
   dbOpenODBC
   dbProcedure
   dbProcedureClose
   dbProcedureExecute
   dbProcedureOpen
   dbQuery
   dbRecordExists
   dbRecordMove
   dbRecordSetMove
   dbRollback
   dbSetTransactionType
   dbStatement
   dbStatementCompile
   dbStatementDiscard
   dbStatementExecute
   dbStatementStreamingExecute
   dbTable
   dbTableClose
   dbTableDelete
   dbTableInsert
   dbTableOpen
   dbTableUpdate
 
----


Copyright © OmniMark Technologies Corporation, 1988-2002.