
Profile Performance — Tried & True Tips to Keep You on Top

Transformative: Tried & True Top Tips To Improve Profile Performance

 

Daily, Profile processes tens of millions of accounts for some of the world’s largest banking institutions as they close one business day and open the next.  The goal, of course, is to accomplish it all with speed and precision.  It’s not by chance that it happens.  Achieving consistently high levels of system performance requires a commitment to system maintenance and smart, efficient coding.  Unfortunately, in too many instances, negligence, lack of knowledge and lax standards place some Profile systems on a path of performance degradation and ultimately, processing failure.

To help resolve Profile system performance issues, Allan Mattson, co-founder of Mozaic Group Partners and one of Profile’s founding architects and developers, is sharing some of what he has learned after nearly four decades of refining the Profile performance experience for client institutions around the world.  This is one in a series of occasional blog items to be published by Mozaic Group Partners on the subject of Profile performance.

Performance: Adopt a Philosophy. Optimize Quality
Allan:    I’m going to start by stating the obvious because it needs to be emphasized.  Using preventive measures is certainly the best and most cost-effective way to avoid and reduce Profile performance issues.  From our experience, institutions that have not adopted a core performance philosophy are at much greater risk of incurring performance breakdowns than institutions that have made such commitments.  At a high level, those commitments include conforming to standards that focus on efficiency, conducting GT.M database tuning / reorganizations and providing regular maintenance.  The painful corollary to performance breakdowns is an elevated risk of operational failure, a frightening prospect that should focus most people’s attention.

Client institutions value Profile because the code and processes can be customized to meet specific needs and requirements.  At the same time, it is poorly designed custom code that is most often responsible for Profile performance degradation.  To solve the dilemma, institutional software development lifecycles need to emphasize performance optimization during every phase — design, coding and testing.  We also advocate for stringent code reviews that are conducted by developers who are qualified and experienced with performance issues and associated traps.  There are no substitutes for knowledge and experience in this arena.  And finally, keep your staff’s Profile training and mentoring fresh and relevant, and pay particular attention to new coding techniques and analysis tools. 

Minimize Number of Client Message Requests to the Server
Allan:    This issue is elementary and easy to avoid, but we find the offense repeated frequently by front-end developers making requests to Profile’s server.  It’s entirely unnecessary to write code that makes multiple requests to retrieve data when all the data can be retrieved with a single request.  By placing fewer burdens on system resources, you gain efficiencies and maintain performance.  Less becomes more.
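As a minimal SQL sketch of the point (the CIF table and column names here are hypothetical stand-ins for whatever a given screen actually needs):

    -- Wasteful: three separate requests to the server, each carrying
    -- its own message and query overhead (names are illustrative).
    SELECT NAM FROM CIF WHERE CID = :cust;
    SELECT TAXID FROM CIF WHERE CID = :cust;
    SELECT DOB FROM CIF WHERE CID = :cust;

    -- Better: a single request retrieves all three values at once.
    SELECT NAM, TAXID, DOB FROM CIF WHERE CID = :cust;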

Minimize Passes Through Large Tables
Allan:    When we refer to batch-related performance issues, we're largely talking about the adverse effects that poorly designed code has on database operations.  For example, a batch process might be needed to extract information from Profile’s Account Table to feed a data warehousing requirement.  But what happens when data from the same table is needed to satisfy another requirement?  When we conduct system performance reviews, we often find multiple data extracts, each coded independently, accessing the same table.  As a result, multiple batch processes each run a pass through the same table when a single pass would do.  Executed for tens of millions of accounts, the inefficiency and unnecessary processing overhead are easy to see.  To create a single, efficient pass through the database, best practice calls for modifying and consolidating the batches that use the same table access.  To make this task easier, Mozaic Group Partners has developed a framework that consolidates data extracts for developers.
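Conceptually, the consolidation looks like this SQL sketch (the table and column names are illustrative assumptions, not Profile’s actual schema):

    -- Before: two independently coded extracts, each making its own
    -- full pass over the same large table.
    SELECT ACN, BAL FROM ACCOUNT;   -- extract 1: data warehouse feed
    SELECT ACN, IRN FROM ACCOUNT;   -- extract 2: second requirement

    -- After: one consolidated pass supplies both downstream consumers.
    SELECT ACN, BAL, IRN FROM ACCOUNT;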

Use Existing Tools
Allan:    Don’t forget about ZBTTPP.  It’s a custom procedure frequently overlooked and often underutilized.  Profile’s accrual process has a “standard hook” that calls out to ZBTTPP, which takes a snapshot of specified end-of-day accrual values and stores them in a static table.  Using ZBTTPP minimizes passes through the account table.  If you’re not using this functionality, it’s worth a second look.

Cache Small Heavily Read Tables in Memory
Allan:   Another way to reduce the number of database operations and gain performance is to cache small, heavily read tables (CTBL, STBL and UTBL) in memory.  In Profile versions 7.2 and below, this means a developer must specifically cache the record when using the PSL method getRecord().  In versions 7.3 and above, a caching option in the file definition determines table caching.  Profile’s standard, core tables have this attribute pre-configured.  Custom tables, however, should be reviewed to determine how this attribute should be set.  Additionally, UCOPTS, which is a PSL configuration file, can be used to control the size of the cache table.

Code Efficiency Clears the Tracks for
Fast Profile Processing Performance

Avoid Sorting or Joining Large Tables in Reports
Allan:    Before creating a report, it’s important to know several things, including the report’s run frequency, the tables / columns you need and the report’s sort order.  A poorly conceived report design places expensive demands on system resources.  Let’s use the example of producing a report sorted by the account opening date for every account in the database.  Using “date account opened” as the report’s sort key, an initial pass is made through the account table to create an index or sort file.  After the index is created, another pass is made through the index to generate the report.  If the intent is to run the report periodically, you can lessen the demand on system resources by creating an index definition in Data-Qwik, which presorts the table.  This approach does not apply to one-time reports.  However, all too often, our performance reviews find dozens of customer or account-related reports that do not have pre-sorted indexes.  Bottom line — avoid sorting or joining large tables when possible, or, alternatively, create report index definitions.
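In generic SQL terms (Profile’s actual mechanism is a Data-Qwik index definition; the table and column names below are illustrative), the difference looks like this:

    -- Unsupported sort key: one pass builds a sort file, then a second
    -- pass over that file generates the report.
    SELECT ACN, DAO FROM ACCOUNT ORDER BY DAO;

    -- A presorted index on the sort key (the SQL analogue of a
    -- Data-Qwik index definition) removes the sorting pass for
    -- periodic runs.
    CREATE INDEX ACCOUNT_DAO ON ACCOUNT (DAO);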

Remove Extraneous Reports
Allan:    Want a sure-fire and very easy way to improve performance?  Do a little housekeeping on your report inventory.  When we conduct a performance audit, one of the first places we start is with a review of Profile reports.  In many instances, institutions continue to run reports — both standard and custom — that are no longer needed or used.  In one instance, an international client was unaware it was unnecessarily running U.S. regulatory reports, which come standard with Profile.  Reports can be “skipped” using a configuration in Profile’s Queuing System.

Dedicate a Profile Instance for Reporting
Allan:    One sure way to improve performance is for institutions to stop running reports / data extracts directly in their live Profile production environment.  Institutions should consider implementing a report server, which reflects data replicated from the production environment.  Running reports in the replicated environment will significantly reduce the resource demands on the production system.

Optimize Your “Critical Path”
Allan:   The critical path is the period of time when transactions are placed in store and forward (offline) mode during the end-of-day process.  To close the processing day and re-open the next, an institution needs to complete the critical path quickly and efficiently.  To do this, the end-of-day and beginning-of-day processes must be configured properly to use either Profile’s Queuing System or a third-party scheduler.  Processes that can run outside of the critical path should be configured to do so.

Minimize Expensive Function Calls
Allan:    We frequently conduct development workshops on PSL code optimization techniques at client sites around the world.  This helps developers understand the performance repercussions behind PSL methods that access the database.  During the workshops, we bring particular attention to computed columns.  Computed columns have a significantly greater impact on processing performance than static columns, a fact many PSL developers miss because they are not familiar with how computed columns work.  Our workshops raise that awareness and show developers how to use computed columns efficiently, minimizing the adverse effects on performance.

Computed columns are by nature dynamic.  A reference to a computed column can potentially result in the execution of thousands of lines of underlying M code before returning a value.  Imagine what happens when a reference to a computed column requires a read through account history to return a value.  Let’s use the computed column of average monthly account balance as an example.  Since that value is not stored, the CPU must cycle through history to make the computation, which also requires additional I/O.  Now, think about having to return that value on a commercial account that has tens of thousands of transactions a day.  The amount of overhead that gets added, and how quickly demands for processing resources escalate, is staggering.

Third-party interfaces are notorious for abusing system resources and causing performance issues.  Our audits have found instances where third-party interfaces request hundreds of columns, many of which are computed, but the data from those computed columns never gets used.  In effect, system resources are wasted computing results that are discarded.  When developing custom computed columns, you need to be sure you are writing efficient code.
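A brief SQL sketch of the trap, where AVG_BAL stands in for a hypothetical computed column and the other names are illustrative as well:

    -- Wasteful: AVG_BAL is computed, so merely referencing it can
    -- cycle through the account's entire history, even if the caller
    -- never uses the returned value.
    SELECT ACN, BAL, AVG_BAL FROM ACCOUNT WHERE ACN = :acct;

    -- Better: request only the static columns the interface consumes.
    SELECT ACN, BAL FROM ACCOUNT WHERE ACN = :acct;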

Implementing Strong Code & Maintenance Standards
Gear Profile for Top Performance

Under the Hood - Consider the M Impact
Allan:    Does that mean you need to avoid calling computed columns?  No, but developers, when referencing or developing computed columns, must be aware of the impact their PSL code has on system performance.  As developers become more aware of the underlying M code associated with computed columns, they become more judicious with their coding.  In our workshops, we teach coding techniques that help developers write code that performs more favorably and efficiently with the database.  It’s important to note that our consultants cut their Profile teeth developing in M.  This allows us to impart additional knowledge and expertise concerning system processes that ultimately compile into M code.  When you learn how things work under Profile’s hood and understand the tools available to help, you can produce far better and more efficient code.

Don’t Forget SQL!
Allan:   Many developers, particularly those working with front-end applications, use SQL to access Profile data.  And as with PSL, you need to be careful when selecting data, particularly when joining tables.  In addition, some ways of structuring SQL commands are more efficient than others.  For example, selecting the account number from the HIST table returns vastly fewer result rows if the DISTINCT clause is used.  Knowledge of SQL, the underlying M code and a firm understanding of table structures will produce more efficient results.
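In sketch form, assuming ACN as the account-number column on HIST:

    -- One result row per history record: potentially millions of rows.
    SELECT ACN FROM HIST;

    -- One row per distinct account number: vastly fewer rows returned.
    SELECT DISTINCT ACN FROM HIST;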

Be Aware of Publicly Scoped Objects
Allan:   Let’s not forget the impact that publicly scoped (declared) variables / arrays and objects have on memory.  Code that does not properly manage publicly scoped variables / arrays and objects can consume inordinate amounts of memory, which adversely affects performance.

Batch It for Day End Speed
Allan:    As you can imagine, Profile’s end-of-day processing places heavy demands on system resources.  Where possible, we recommend making full use of Profile’s batch processing functionality.  Batch processes, because of their multithreaded, concurrent processing capabilities, handle and process data more quickly and efficiently than their single-threaded procedural counterparts.  For example, it is common for a large Profile institution to require custom analysis on certain account types.  A batch process can perform that function exponentially faster than a regular procedure because the batch spawns multiple processing threads and uses a scheduler to divide the load among the threads.  When we conduct performance audits for client institutions, we look to introduce batch processes where possible and to improve efficiency among existing batches.  In our workshops, we also help developers learn how to “tune” batches to improve performance.
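As a conceptual sketch only (the range boundaries and names are illustrative, and Profile’s scheduler handles the actual division of work), the load split resembles concurrent range-bounded scans like these:

    -- Each thread scans a disjoint slice of the account table at the
    -- same time, instead of one process scanning the whole table.
    SELECT ACN FROM ACCOUNT WHERE ACN BETWEEN 10000000 AND 19999999;  -- thread 1
    SELECT ACN FROM ACCOUNT WHERE ACN BETWEEN 20000000 AND 29999999;  -- thread 2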

Avoid TP Restarts with Careful Design
Allan:    TP Restarts are the result of database conflicts between processes.  They are GT.M’s way of ensuring that transactions conform to the ACID (Atomicity, Consistency, Isolation, Durability) test.  However, when a large number of TP Restarts occur, the result can be catastrophic to system performance.  In our workshops, we cover the tools and techniques developers use to design procedural code that detects and avoids TP Restarts.

Use Transaction Mode 4 for System-Generated Financial Transaction Posting
Allan:   Profile’s transaction engine (TTXP2 in pre-7.x versions, TRNDRV in 7.x versions) is used to post financial transactions to deposit, loan and miscellaneous (General Ledger) accounts.  Various online and batch processes call the transaction engine with parameters that determine how the transactions will post.  One such parameter, Transaction Mode, specifies how the transaction engine processes transactions:

Mode 0: batch input
Mode 1: online, from teller applications
Mode 2: online, from non-teller applications where overrides are automatically overridden
Mode 3: store and forward
Mode 4: system-generated
Mode 5: secondary
Mode 6: future-dated transactions

In many cases, we have seen custom batches call the transaction engine with a Transaction Mode of 1 (online) instead of 4 (system-generated).  In such occurrences, the transactions are written to the teller posting file (TTX).  Transaction Mode 4 eliminates unnecessary read and write operations by not writing to TTX.  In addition, Transaction Mode 4 eliminates TP Restarts against TTX, which play havoc with performance.  While Transaction Mode 4 is used frequently in standard, core Profile batches, developers sometimes bypass it because the mode’s use is not fully understood.

Don’t Overlook the Database
Allan:   The well-being of the database has one of the largest impacts on performance and therefore requires serious and consistent attention.  The health of any Profile database depends on proper configuration and tuning.  This includes optimizing the database layout, correctly configuring database parameters and maintaining the database operationally.  In our experience, poor database configuration or lack of tuning can lead to performance degradation in excess of 60 percent.

How Do I Know if I Need a Performance Audit?
Allan:    Performance issues can start small or appear suddenly as large problems.  The rule of thumb is simple: always check for performance degradation.  In our experience, where there’s smoke, there’s fire.  Don’t wait for yellow flags to become red.  In some cases, the issue is confined to a certain area and the fix is relatively simple, resolvable within a few hours.  Other situations, like those that can arise during an upgrade, can be more complicated and take weeks to fully resolve.  We were once asked to resolve an issue where the client had completed a Profile version upgrade only to find that its end-of-day processing was taking an unsustainable 36 hours to complete.  After a full audit, we identified and fixed multiple issues, which enabled the institution to complete its processing day in six hours.  As I mentioned at the top of this discussion, the key is not to wait for issues to occur, but to reduce the likelihood of them happening by adopting a core performance philosophy that includes coding discipline and sound maintenance practices.  The old axiom that an ounce of prevention is worth a pound of cure certainly applies to Profile performance.

Get Your Profile Performance Flying Today!

-------------

The developers and analysts of Mozaic Group Partners are among the most expert and experienced people in the world when it comes to identifying and resolving performance issues for Profile client institutions.
If you have questions about your system’s performance, contact us.  We are happy to help.

If you found this article helpful, share it with someone you know who could also benefit from its content.

We Are Mozaic Group Partners

We are the leading, independent, global provider of Profile banking application services. We partner with Profile institutions to develop, implement, fix, enhance and protect the investment your institution has made in its banking system. With more than 250 client engagements in 18 countries and counting, we have helped more institutions solve their Profile issues than any other services provider on the planet.

Providing A Global Reach for Profile Services