
Get to Amazing! Write Efficient Profile Code & Deliver Better Performance

When it comes to Profile code there are the competent masses, the proficient journeymen, some experts, and, if you search hard enough, a handful of wizards. While Chuck Hardy would eschew the magical moniker, he is one of Profile's founding core developers, with a career in financial technology services that has spanned more than four decades. He is one of only a few Profile developers with more than 33 years of Profile development experience. He is among Mozaic Group Partners’ most senior consultants and unquestionably qualified as a Profile code expert extraordinaire. This is Chuck's first in what we hope will be several blog articles, authored by him and others, on technical coding tips. It is also the latest in a series of informational blogs published by Mozaic Group Partners on the subject of improving Profile performance. We'll let Chuck take it from here.

---------------------------------------

As many of you are aware, Profile contains a fully integrated ANSI-standard SQL engine that is used consistently across the entire PSL code set. PSL-SQL interprets the SQL commands contained in procedural code, optimizes them, and builds M code to perform the most efficient SQL operation. As good as PSL-SQL is, it’s still possible to “shoot yourself in the foot” if you write inefficient PSL code. To prevent self-inflicted wounds that can crush performance, this blog article contains several tips that you can use to create the most efficient PSL code possible.

Let SQL Do the Heavy Lifting
Too many times during our code reviews, we find the following: code that selects all records, instantiates the record’s object and then checks various values in the record to determine if the record should be processed. A simple example would be:

      type ResultSet rs = Db.select("CID", "DEP")
      while rs.next() do {
            set dep = Db.getRecord("DEP", "CID = :rs.getCol(""CID"")", 1)
            if dep.index '= "LIBOR" quit
          ...

A more efficient version would move the INDEX check to the where clause:

      type ResultSet rs = Db.select("CID", "DEP", "INDEX = 'LIBOR'")
      while rs.next() do {
            set dep = Db.getRecord("DEP", "CID = :rs.getCol(""CID"")", 1)
          ...
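The same principle applies in any SQL environment: push the filter into the WHERE clause instead of fetching every row and filtering in application code. As a minimal, non-PSL sketch of the idea (the table and column names here are invented for illustration, using Python's built-in sqlite3):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dep (cid INTEGER PRIMARY KEY, idx TEXT)")
con.executemany("INSERT INTO dep VALUES (?, ?)",
                [(1, "LIBOR"), (2, "PRIME"), (3, "LIBOR"), (4, "SOFR")])

# Version 1: select everything, then filter in application code --
# every row crosses the engine/application boundary.
slow = [cid for (cid, idx) in con.execute("SELECT cid, idx FROM dep")
        if idx == "LIBOR"]

# Version 2: let the WHERE clause do the heavy lifting --
# only the matching rows are ever returned.
fast = [cid for (cid,) in
        con.execute("SELECT cid FROM dep WHERE idx = 'LIBOR'")]

assert slow == fast == [1, 3]
```

Both versions produce the same result, but Version 2 lets the engine use whatever index is available and returns only the rows you actually need.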

The compiled M code for each version, taken from the vOpen section of the compiled program, appears below.

Version 1:

    vL1a0 S vobj(vOid,0)=0 Q
    vL1a1 S vobj(vOid,1)=$C(254)
    S vobj(vOid,3)=""
    vL1a3 S vobj(vOid,3)=$O(^XCLSACN("D",vobj(vOid,3)),1) I vobj(vOid,3)="" G vL1a0

Version 2:

    vL2a0 S vobj(vOid,0)=0 Q
    vL2a1 S vobj(vOid,1)=$C(254)
    S vobj(vOid,3)=""
    vL2a3 S vobj(vOid,3)=$O(^XREF("INDEX","LIBOR",vobj(vOid,3)),1) I vobj(vOid,3)="" G vL2a0
    S vobj(vOid,4)=$G(^ACN(vobj(vOid,3),50))
    I '($P(vobj(vOid,4),"|",2)="D") G vL2a3

Version 1 iterated through every deposit account, returning the account number, which allowed us to open the record and look for an interest index equal to “LIBOR.” When you compare Versions 1 and 2, you will notice another very important feature of PSL-SQL. The second version causes fewer records to be instantiated because the WHERE clause allowed PSL-SQL to optimize the code to use the DATA-QWIK index DEP.INTINDEX. As a result, the records returned from Version 2 already have LIBOR as an interest index. And, because we’re selecting from DEP, PSL-SQL is "smart" enough to add a check on CLS to only return deposit accounts!

Use DISTINCT
When writing SQL selects, be aware that selecting keys from a table will give you a results row for each unique key at the lowest level of the table. For example, let's say that you wanted to determine which days a teller had transactions. In the example below, you would receive more than 25,000 records because the lowest-level key, TSEQ, had more than 25,000 individual transactions.

    select tjd, brcd, uid
    from ttx
    where uid=9999 and brcd = 0 and tjd > '31/01/2016' and tjd < '01/03/2016'

    System      Branch  User
    Processing    Code  ID
    ----------  ------  --------------------
    01/02/2016       0  9999
    01/02/2016       0  9999
    01/02/2016       0  9999
    ...
    29/02/2016       0  9999
    29/02/2016       0  9999
    29/02/2016       0  9999
    25372 Rows Processed ... Press any key to continue

By adding “DISTINCT” to the SELECT statement we can determine that there were 29 days on which this teller had transactions.

    select distinct tjd, brcd, uid
    from ttx
    where uid=9999 and brcd = 0 and tjd > '31/01/2016' and tjd < '01/03/2016'

    System      Branch  User
    Processing    Code  ID
    ----------  ------  --------------------
    01/02/2016       0  9999
    02/02/2016       0  9999
    03/02/2016       0  9999
    ...
    27/02/2016       0  9999
    28/02/2016       0  9999
    29/02/2016       0  9999
    29 Rows Processed ... Press any key to continue

DISTINCT causes SQL to ignore data beneath the lowest key selected. If you were relying on the first select to retrieve records for additional processing, you would have performed many thousands more operations than the select using the DISTINCT clause.
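DISTINCT behaves the same way in any SQL engine. A small sketch mirroring the teller example above (schema and values are made up for illustration, using Python's sqlite3):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ttx (tjd TEXT, brcd INTEGER, uid INTEGER, tseq INTEGER)")
rows = [(day, 0, 9999, seq)
        for day in ("2016-02-01", "2016-02-02", "2016-02-03")
        for seq in range(4)]                      # 4 transactions per day
con.executemany("INSERT INTO ttx VALUES (?, ?, ?, ?)", rows)

# Without DISTINCT: one result row per transaction (the lowest-level key).
plain = con.execute(
    "SELECT tjd, brcd, uid FROM ttx WHERE uid = 9999").fetchall()

# With DISTINCT: one result row per unique (tjd, brcd, uid) combination.
days = con.execute(
    "SELECT DISTINCT tjd, brcd, uid FROM ttx WHERE uid = 9999").fetchall()

assert len(plain) == 12   # every transaction
assert len(days) == 3     # every day with at least one transaction
```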

Remember JOINs
When trying to identify records, it can be more efficient to use a JOIN to filter the records returned. For example, to identify all savings accounts in ACN order you could write the following:

     type ResultSet rs = Db.select("ACN", "RELCIF")
     while rs.next() do {
           type ResultSet rs1 = Db.select("CID", "RELCIF", "ACN = :rs.getCol(""ACN"")")
           while rs1.next() do {
                 set dep = Db.getRecord("DEP", "CID = :rs1.getCol(""CID"")", 1)
                 if dep.grp '= "SAV" quit
                 ...

Or, you could use a simple JOIN:

     type ResultSet rs = Db.select("CID", "RELCIF JOIN DEP", "DEP.CID = RELCIF.CID AND DEP.GRP = 'SAV'")
     ...
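The win is general: one JOIN replaces a query-per-row loop plus a record fetch per result. A non-PSL sketch of the two approaches (table and column names are invented for illustration, using Python's sqlite3):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE relcif (acn INTEGER, cid INTEGER)")
con.execute("CREATE TABLE dep (cid INTEGER PRIMARY KEY, grp TEXT)")
con.executemany("INSERT INTO relcif VALUES (?, ?)",
                [(100, 1), (100, 2), (200, 3)])
con.executemany("INSERT INTO dep VALUES (?, ?)",
                [(1, "SAV"), (2, "CD"), (3, "SAV")])

# Nested-loop style: a record fetch for every relationship row,
# with the group filter applied in application code.
nested = []
for (acn, cid) in con.execute("SELECT acn, cid FROM relcif"):
    grp, = con.execute("SELECT grp FROM dep WHERE cid = ?", (cid,)).fetchone()
    if grp == "SAV":
        nested.append((acn, cid))

# One JOIN: the engine matches and filters in a single statement.
joined = con.execute(
    "SELECT relcif.acn, relcif.cid FROM relcif "
    "JOIN dep ON dep.cid = relcif.cid WHERE dep.grp = 'SAV' "
    "ORDER BY relcif.acn, relcif.cid").fetchall()

assert nested == joined == [(100, 1), (200, 3)]
```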

Don't Forget SQL Aggregate Functions
Instead of doing the math yourself, let SQL's aggregate functions handle it for you. See the example below:

    select count(cid), min(bal), max(bal), avg(bal), sum(bal)
    from dep
    where stat <> 4

          CID           BAL           BAL           BAL             BAL
        COUNT           MIN           MAX           AVG             SUM
   ----------  ------------  ------------  ------------  --------------
      1394383     -22267.47   11496397.87       1415.06  19731386161.08
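These aggregates are standard SQL, so the same statement works almost anywhere. A tiny sketch with made-up balances (using Python's sqlite3; the schema is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dep (cid INTEGER PRIMARY KEY, bal REAL, stat INTEGER)")
con.executemany("INSERT INTO dep VALUES (?, ?, ?)",
                [(1, 100.0, 0), (2, -50.0, 0), (3, 250.0, 0),
                 (4, 999.0, 4)])   # stat = 4 is excluded by the WHERE clause

# One pass over the table computes all five aggregates.
row = con.execute(
    "SELECT COUNT(cid), MIN(bal), MAX(bal), AVG(bal), SUM(bal) "
    "FROM dep WHERE stat <> 4").fetchone()

assert row == (3, -50.0, 250.0, 100.0, 300.0)
```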

If you need to aggregate data at a specific data point (versus the table’s total, as in the example above), use the “GROUP BY” SQL clause. GROUP BY tells SQL the column(s) over which to accumulate your data for aggregation. For example:

    select grp, count(cid), min(bal), max(bal), avg(bal), sum(bal)
    from acn
    where stat <> 4 and cls = 'D'
    group by grp

    Prod           CID          BAL            BAL            BAL               BAL
    Grou         COUNT          MIN            MAX            AVG               SUM
    ----  ------------  -----------  -------------  -------------  ----------------
    CD           35520         0.00      800000.00       11633.12      413208325.88
    DDA          58890     -4441.12      123738.00         973.44       57325734.53
    SAV         300474     -1000.00      515043.27        9842.94     2957547629.36
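GROUP BY is likewise standard SQL. A minimal sketch producing one aggregate row per product group (using Python's sqlite3; the schema and values are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dep (cid INTEGER PRIMARY KEY, grp TEXT, bal REAL)")
con.executemany("INSERT INTO dep VALUES (?, ?, ?)",
                [(1, "SAV", 100.0), (2, "SAV", 300.0),
                 (3, "CD", 500.0), (4, "DDA", 50.0)])

# One aggregate row per distinct grp value.
rows = con.execute(
    "SELECT grp, COUNT(cid), SUM(bal) FROM dep "
    "GROUP BY grp ORDER BY grp").fetchall()

assert rows == [("CD", 1, 500.0), ("DDA", 1, 50.0), ("SAV", 2, 400.0)]
```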

Force SQL to Use a Certain Index
PSL-SQL does a terrific job of optimization by finding indexes that provide the most efficient means to retrieve the data you need. However, what do you do when PSL "selects" an index that you don’t feel is the best fit for your application? You can change it, of course: you have the power to override the index that SQL determines. Consider the following:

The PSL Db.select method accepts several parameters; the 6th carries options for the SQL engine. The structure of the select statement is:

    Db.select(String columnList, String tableName, String whereClause, String orderByClause,
    String groupByClause, String PSQLparameters)

The syntax for the parameters variable is:  "/"_parameter1_"="_value_"/"_parameter2_"="_value...

You can force SQL to use a specific index by using the parameter /INDEX=index_name.

The index_name variable refers to a DATA-QWIK table that defines the index. (Note: most indexes are not defined as DATA-QWIK tables, which means you might need to create a table with the structure of the index to use the INDEX parameter.)

If you want to use the order of the table itself, use the table name as the index_name.
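The /INDEX= parameter is specific to PSL-SQL, but most engines expose a comparable override. As a rough non-PSL analogue, SQLite's INDEXED BY clause forces a named index (and raises an error if the index cannot be used), which you can confirm with EXPLAIN QUERY PLAN:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dep (cid INTEGER PRIMARY KEY, idx TEXT, bal REAL)")
con.execute("CREATE INDEX dep_by_idx ON dep (idx)")
con.executemany("INSERT INTO dep VALUES (?, ?, ?)",
                [(1, "LIBOR", 10.0), (2, "PRIME", 20.0)])

# INDEXED BY overrides the planner's own index choice, much as
# /INDEX=index_name overrides PSL-SQL's choice.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT cid FROM dep INDEXED BY dep_by_idx WHERE idx = 'LIBOR'").fetchall()

# The plan's detail column should name the forced index.
assert any("dep_by_idx" in r[-1] for r in plan)
```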

---------------------------------------

Profile Code Development by Mozaic

The developers and analysts of Mozaic Group Partners are among the most expert and experienced Profile people in the world when it comes to identifying performance issues and providing brilliant solutions for Profile client institutions. If you have questions about your system’s performance, or think your team could benefit from
a Profile code review session or code workshop, contact us. We are happy to help.

If you found this article helpful, share it with someone you know who could also benefit from its content.

We Are Mozaic Group Partners

We are the leading, independent, global provider of Profile banking application services. We partner with Profile institutions to develop, implement, fix, enhance and protect the investment your institution has made in its banking system. With more than 250 client engagements in 18 countries and counting, we have helped more institutions solve their Profile issues than any other services provider on the planet.
