MightyData

FileMaker and WordPress Consultants


Server-Side Scripting Improves Performance by 99%

June 20, 2013 by Darren Burgess 7 Comments

Target

MightyData developed an order management system for Lettermen Sports several years ago. Lettermen Sports is one of the premier sports apparel production companies in Minnesota, with locations in Blaine and Plymouth and a large production facility that completes nearly 300 orders per week for sports apparel screen printing, lettering and shipping.

The primary purpose of its FileMaker application is to manage the flow of orders through its production facility. A core facet of the solution is its interface with the SQL-based point-of-sale program, CounterPoint. The FileMaker solution was developed by MightyData to import orders and order detail from CounterPoint.

Performance of the import/update process began to lag behind the increasingly fast pace of the business as the record counts in some of the FileMaker tables grew to over 300,000. With the update process taking an average of 23 minutes, the in-house FileMaker server performance was being compromised by the load. Employee productivity was reduced by having to wait for orders to be imported, and by the performance hit to the FileMaker server.

Lettermen Sports’ management approached MightyData to re-design the order import process to accomplish the following:

  • Reduce the time required to import and update orders from CounterPoint to the FileMaker order management system
  • Automate the import process
  • Eliminate the up-to-24-hour delay for new and updated CounterPoint orders to be created/updated in the FileMaker database

Technology

MightyData’s technical team arrived at a solution to redesign the import/update process to more closely match the current needs of the business. The original import/update process imported data from the point-of-sale solution using FileMaker’s import script step to update matching records and create new records that were unmatched. Import subscripts were then used to import data from across the CounterPoint solution so that data required for shop floor production was available in the FileMaker database. This approach was very effective at ensuring that all data was successfully imported, since the solution re-imported all orders and line-item detail each time the update was run.

As the business rules evolved, however, the update process became more complex. Combined with high record counts, a new approach was required.

The new design was based on only working with new and changed records in the CounterPoint system. In so doing, the number of order records that the scripts were required to process was reduced dramatically, to just 3 or 4 at a time. Conveniently enough, CounterPoint provides an order creation and update timestamp which is updated any time an order or its line-item detail is changed. This allowed for a complete overhaul of the scripting process that simultaneously accomplished all three of the customer’s goals. Furthermore, all script execution was moved to the server, allowing staff to focus on order production and not on managing the update process.
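The delta-import idea described above is easy to sketch outside FileMaker. Assuming a hypothetical `orders` table with an `updated_ts` column standing in for CounterPoint's order update timestamp (all names here are illustrative, not from the actual solution), a minimal Python/sqlite3 sketch of "only process new and changed records" looks like this:

```python
import sqlite3

def fetch_changed_orders(conn, last_run_ts):
    """Return only orders created or modified since the last import run."""
    cur = conn.execute(
        "SELECT order_no, customer, updated_ts FROM orders WHERE updated_ts > ?",
        (last_run_ts,),
    )
    return cur.fetchall()

# Demo with an in-memory database standing in for the point-of-sale system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_no INTEGER, customer TEXT, updated_ts INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "Blaine HS", 100), (2, "Plymouth FC", 205), (3, "North Stars", 210)],
)

# Only orders touched after timestamp 200 need to be processed.
changed = fetch_changed_orders(conn, 200)
```

Each run then handles a handful of rows instead of the full 300,000-record table, which is where the dramatic speedup comes from.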

Transformation

This project is an excellent example of how value is driven by the intersection of a customer’s need with great technical planning and execution, in a spirit of partnership to get the job done well. The new solution reduced the update execution time from an average of 23.4 minutes to just 2.5 seconds – a 99.8% performance gain. While technically impressive, the real value comes from production staff getting new orders and updates no more than 30 minutes after they are recorded in the point-of-sale system. If even that is too long, the update can be run manually in a few seconds, giving the team immediate access to new order data.

As an added bonus, MightyData also gets to work with some really good people at Lettermen Sports. They help us help them. And that partnership makes all the difference.

Filed Under: Scripting Tagged With: Case study, Performance, Server-side scripting

Evaluating Script and Layout Performance

December 26, 2012 by Darren Burgess 6 Comments


I recently implemented a customer reporting project using the Virtual List technique.  In this particular implementation, the customer needed to create a cross-tab style project schedule report with 28 columns and about 30 rows.   Each “cell” in the report is an unstored calculation field that would evaluate a repeating global variable to derive the correct value to display.  Creating the report is a combination of scripting to load values into the global variable and reevaluation of the 800+ unstored calculations in the report.

Concern for the performance of the report drove me to explore ways to evaluate performance.  In particular, I wanted to be able to evaluate script performance vs calculation performance so that I would know where I would get best performance improvements for my efforts.

Evaluating Script Performance

Evaluating script performance was a simple matter of setting the start and end times to variables and calculating the difference at the end of the script. At the beginning of the script I use the undocumented function Get ( UTCmSecs ) to get millisecond timestamp precision:

Set Variable [ $startTime ; Value: Get ( UTCmSecs ) ]

At the end of the script I simply calculate the difference between start and end times and divide by 1000 to get a script run-time duration:

Set Variable [ $$TotalScriptTime ; Value: ( Get ( UTCmSecs ) - $startTime ) / 1000 & " Seconds" ]

Note: Make sure the data viewer is not visible when evaluating script performance.  The display of fields and variables in the data viewer will slow script execution down.
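The same start/end-variable pattern translates directly to other environments. Here is a minimal Python sketch (the `timed` helper is mine, not from the article) that wraps any function call and reports its duration in the same "N Seconds" format:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and report its duration, mirroring the start/end-variable
    pattern of the FileMaker script."""
    start = time.monotonic()            # like Set Variable [$startTime]
    result = fn(*args, **kwargs)
    elapsed = time.monotonic() - start  # like the end-of-script calculation
    return result, f"{elapsed:.3f} Seconds"

result, duration = timed(sum, range(1_000_000))
```

`time.monotonic()` already returns seconds, so no division by 1000 is needed as it is with Get ( UTCmSecs ).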

Evaluating Layout Performance

Of course, script run time is only part of the story.  I also wanted to know how long it took for the report to reevaluate all of the 800+ unstored calculations.  This would provide excellent information into what report elements (scripts vs calcs) had the biggest effect on performance.  I was also curious as to how different calculation methods affected performance (like ExecuteSQL vs FileMaker functions based on relationships).

To accomplish this I added three global variables to the layout:

Layout variables to check performance

The start and end time global variable objects have conditional formatting applied.  Since conditional formatting evaluates whenever a layout is refreshed, I can use a Let () function in the conditional formatting calculation to create and calculate global variables.  I can further take advantage of the stacking order of objects in the layout and cause the start time variable to be calculated first by applying “Arrange/Send to Back” to the start time variable in layout mode.  Here is the calculation used in the conditional formatting for the start time:

Let ( $$reportStartTime = Get ( UTCmSecs ) ; "" )

The end time and total time are created with conditional formatting as well.  In this case, the end time variable’s position is set using “Arrange/Bring to Front.”  Lastly, the position of the total time variable is also moved to the front. Here is the calculation applied in the conditional formatting of the end time:

Let ( [
  $$reportEndTime = Get ( UTCmSecs ) ; 
  $$reportTotalTime = ($$reportEndTime - $$reportStartTime) / 1000 & " Seconds" ] ;
"")

I tuck these globals away into the undisplayed right-hand margin of the layout and slide them out whenever I have the need to evaluate the performance of the report layout.  These global variable objects are easily copied to other layouts or solutions where performance evaluation is needed.

By the way, it takes about 3.3 seconds for the report scripting to execute, while the report’s unstored calculations take only about 0.015 seconds to reevaluate. Clearly, the scripting of this report is the best target when looking to improve its performance.

Filed Under: Scripting Tagged With: Performance, Virtual list

FileMaker SQL WAN Performance Trouble

September 6, 2012 by Darren Burgess 10 Comments

One of the new all-the-rage features in FileMaker 12 is the new ExecuteSQL function. When I heard about this new function and its potential to massively simplify the graph, I decided I did not want to wait to convert my customer’s School Management Solution to FMP12 and elected to use the free BaseElements plugin to access the FileMaker internal SQL API in FileMaker 11.

Some months and many hours of  learning curve time later, I was able to successfully implement SQL queries in many areas of my customer’s solution. Since I am using a plugin in a FMP11 solution, I took advantage of the opportunity to use SQL Insert, Update, and Delete queries to create, modify and delete records without having to create any schema to support the actions. As a result, I created very few additional relationships and was able to write significantly shorter scripts as I implemented several new features for the customer.

After deploying the SQL-enabled solution, we encountered a nasty performance issue over the WAN. This solution has a person table with 20,000 records and a related Event table with 80,000 records. Every time the remote WAN user added a new event, FileMaker would freeze for as long as 5 minutes. This issue could not be replicated for LAN users.

Debugging revealed that after the event was added, the very next script step that performed a SQL Select on the Events table was the source of the long delay/freeze.

Here is the original offending SQL code, part of a script step whose intent was to capture the ID of an event for purposes of conditional formatting and data entry control.  This query was part of a more complex calculation, however for our purposes the simplified SQL statement is sufficient:

SELECT id
FROM Events
WHERE id_Person = $personID AND status = 'No' AND method <> 'NonEvent'

In examining this code, I concluded that it could be simplified by adding a field to the Events table, Events::eventIsValid. This is a stored auto-enter Boolean calculation that resolves to 1 when status = ‘No’ and method ≠ ‘NonEvent’. I then used that field in the SQL statement, reducing the query by one condition:

SELECT id
FROM Events
WHERE id_Person = $personID AND eventIsValid = 1

Unfortunately, this change did not significantly affect WAN performance, as there was still a minutes-long delay when this calculation was evaluated.  At this point, in the interest of getting this bug fix delivered to the client as quickly as possible, I elected to abandon SQL as a technique for this particular function in the database. To that end, I returned to a good, old-fashioned, multi-predicate FileMaker relationship that would serve the purpose of finding all of the related event records that were considered valid as per the business rules of the customer.
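The optimization attempted here – collapsing two conditions into one precomputed flag – can be illustrated with a small sqlite3 sketch. The schema and sample rows below are hypothetical and only loosely mirror the FileMaker fields:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE events (
           id INTEGER, id_person INTEGER, status TEXT, method TEXT,
           event_is_valid INTEGER)"""
)
# event_is_valid is precomputed: 1 when status = 'No' and method <> 'NonEvent'.
rows = [
    (1, 7, "No",  "Call",     1),
    (2, 7, "No",  "NonEvent", 0),
    (3, 7, "Yes", "Call",     0),
    (4, 8, "No",  "Call",     1),
]
conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?, ?)", rows)

# Original multi-condition query.
multi = conn.execute(
    "SELECT id FROM events WHERE id_person = ? AND status = 'No' AND method <> 'NonEvent'",
    (7,),
).fetchall()

# Simplified query against the precomputed flag.
flagged = conn.execute(
    "SELECT id FROM events WHERE id_person = ? AND event_is_valid = 1", (7,)
).fetchall()
```

Both queries return the same rows; the point of the flag is to let the engine evaluate one stored value instead of two conditions per row.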

Lesson learned in this case is to consider all possible solutions when debugging and optimizing a solution. In this case, the new, fancy FileMaker feature would have required more time than was of value to the customer, and so I resorted to a more conventional approach rooted in the pure FileMaker techniques. Ultimately the priority was to serve the customer with a timely resolution of the bug – a simple case of the end result (delivering value to the customer) trumping a focus on the underlying technology.

Stay tuned for a followup to this blog post where I elaborate on the technical solution to the slow SQL query.

Filed Under: Calculations Tagged With: ExecuteSQL, FileMaker 12, Performance

Stop Embedding Documents in Your Database!

August 9, 2012 by Kirk Bowman 30 Comments


Recently, I converted two customer databases from FileMaker 11 to FileMaker 12. Both customers were embedding documents in the database by inserting them into container fields. The result in each case was database bloat—a file size much larger than necessary (1GB or greater).

The disadvantages of a large database in FileMaker are several, including slower performance (more data to push over the network), lower productivity (long wait times during backups), and greater risk (more susceptibility to corruption in the event of a crash). Fortunately, FileMaker 12 has a new feature, enhanced (or external) container fields, to address this.

For each database, I converted it by dragging the file onto the icon for FileMaker 12 and going through the automatic conversion process. Next, I opened the database and sorted the field list by “field type”.

Container field in FileMaker 12

Next, I went through each table looking for any container fields. (You could also use BaseElements or Inspector to search by field type.) For each container field, I changed the storage option to “Store container data externally” under Field Options.

External storage setting for container field

Finally, I closed the Manage Database window to save the changes. FileMaker Pro started moving the documents in the container fields to a folder called “Files” outside the database. It then compressed the database to reclaim the unused space.

The results were impressive. For the first database, there was one container field in one table with 68K records. Each record stored one thumbnail image. By moving the container data to external storage, the database size was reduced 46.5% (750MB to 400MB).

For the second database, there were twelve container fields in one table with 750 records. Each record stored multiple images and PDFs that were not optimized. Moving the container data to external storage reduced the file size by 99% (2.3GB to 3.4MB).
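The mechanics behind the savings – storing a file reference instead of the file’s bytes – can be sketched in a few lines of Python with sqlite3. This is an analogy with a hypothetical schema, not FileMaker’s actual external-storage format:

```python
import sqlite3, tempfile, os

# A document stored as an external file, with only its path in the database,
# instead of embedding the bytes in a BLOB column.
doc_bytes = b"%PDF-1.4 fake invoice ..."

storage_dir = tempfile.mkdtemp()
path = os.path.join(storage_dir, "invoice_0001.pdf")
with open(path, "wb") as f:
    f.write(doc_bytes)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER, file_path TEXT)")
conn.execute("INSERT INTO documents VALUES (1, ?)", (path,))

# Reading the document back resolves the reference to the external file.
stored_path = conn.execute("SELECT file_path FROM documents WHERE id = 1").fetchone()[0]
restored = open(stored_path, "rb").read()
```

The database row now costs a few dozen bytes regardless of how large the document is, which is why the second file shrank from 2.3GB to 3.4MB.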

The conclusion is obvious. Stop embedding documents in your database! It increases the file size unnecessarily and creates the negative effects mentioned above. Of course, you have to upgrade to FileMaker 12 to take advantage of this new feature. The only caveat is to test the functionality of the new database before deploying it to production (just in case).

Filed Under: Relational Design Tagged With: Container fields, FileMaker 12, Performance

12 Days of FileMaker 12: Server Performance

May 31, 2012 by Kirk Bowman 4 Comments

FileMaker Server 12 statistics graph

FileMaker Server is the primary tool for monitoring and enhancing the performance of FileMaker solutions. FileMaker Server 12 provides new features to increase the ability of the database administrator or developer to create the best experience for the users.

Performance Tools in FileMaker Server

In this session from the 12 Days of FileMaker 12, I explore six aspects of performance with the new server.

  1. Faster WAN Performance – FileMaker Server 12 can perform finds up to 40% faster, display large value lists more quickly, and evaluate filtered portals on the server instead of the workstation. These enhancements improve the performance of FileMaker over a wide-area network.
  2. More Stable Architecture – The server architecture has six separate processes to isolate the impact of one on another. Five of the processes can be started, stopped or restarted from the command line for more control over the server.
  3. More Efficient Backups – Backup schedules only back up the databases that have changed, using hard links for databases that have not. The new Progressive Backup goes further by backing up only the data that has changed. Both reduce the time necessary to back up a database.
  4. 64-bit Application – On a 64-bit operating system, FileMaker Server 12 can use up to one terabyte of RAM for the server cache. It is important because RAM is faster than the hard disk. The database, web publishing, ODBC and progressive backup processes are each 64-bit.
  5. Progressive Media Download – With the new Remote Container feature, the server can create thumbnails of large images and stream PDF and video files directly to FileMaker Pro asynchronously.
  6. Faster Web Publishing – The new 64-bit Web Publishing Engine can handle more simultaneous custom web publishing requests, faster than the previous server. Also, session management is faster due to a new caching algorithm.
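The hard-link trick behind the more efficient backups (item 3) can be demonstrated outside FileMaker. In this Python sketch, a second backup folder links to an unchanged file rather than copying it, so the new backup costs no additional disk space:

```python
import os, tempfile

# Yesterday's backup holds the real file; today's backup hard-links it.
root = tempfile.mkdtemp()
day1 = os.path.join(root, "backup_day1"); os.mkdir(day1)
day2 = os.path.join(root, "backup_day2"); os.mkdir(day2)

db = os.path.join(day1, "orders.fmp12")
with open(db, "wb") as f:
    f.write(b"unchanged database contents")

linked = os.path.join(day2, "orders.fmp12")
os.link(db, linked)  # hard link: same inode, no data copied

same_file = os.path.samefile(db, linked)
link_count = os.stat(db).st_nlink  # 2: both directory entries share the data
```

Only databases that actually changed need a fresh copy; everything else is a link, which is why the backup window shrinks.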

Even with these new features, performance is a complex interaction of hardware, software (OS), and database design. FileMaker Server 12 includes a new graph view for server statistics and more flexibility in the Log Viewer to help identify weak spots in the performance equation.

FileMaker Server Training

To go deep into performance with FileMaker Server 12, watch the video below, which includes 15 minutes of question and answer.

Filed Under: FileMaker Server Tagged With: 12 Days of FileMaker 12, Backup, FileMaker 12, Performance, Video

12 Days of FileMaker 12: Converting and Upgrading

May 14, 2012 by Kirk Bowman Leave a Comment

Want to start taking advantage of all the great new features in FileMaker 12 but don’t know where to start? In this session from the 12 Days of FileMaker 12, we explore deploying FileMaker 12 even if your environment runs an older version of FileMaker.

Converting and Upgrading to FileMaker 12

Topics in this video include:

  • Testing your existing solution with FileMaker 12
  • Converting your existing solution to FileMaker 12
  • Backing up your data and settings from the current version of FileMaker
  • Uninstalling the earlier version of FileMaker Pro and FileMaker Server
  • Deploying FileMaker Server and connecting it to your existing web server
  • Managing existing plug-ins with FileMaker 12

About Tim Neudecker

Tim Neudecker is a leading software engineer who has over 20 years’ experience as a professional FileMaker developer. He spent over ten years as an in-house developer for companies such as Macy’s/Bloomingdales and AGA. In 2003, he joined KyoLogic as co-founder and CTO.

Tim has presented four times to the FileMaker Developer Conference on topics including security, system performance and developer tools. He has published articles in FileMaker Advisor on system integration and performance tuning. In 2003, he was honored with a FileMaker Excellence Award for his contributions to the developer community.

Filed Under: FileMaker Server Tagged With: 12 Days of FileMaker 12, Best practices, FileMaker 12, Performance, Video

12 Days of FileMaker 12: Server File Management

April 30, 2012 by Kirk Bowman Leave a Comment

Changes in FileMaker Server 12’s backup scheme and the new ability to store container data externally offer significant improvements in data protection, performance and data delivery to the user.

File Management in FileMaker Server

In this session from the 12 Days of FileMaker 12, we explore:

  • Full backup functionality changes (hard links)
  • Understanding, configuring and restoring from progressive backups
  • Understanding and implementing external container data storage
  • Delivering container data via HTTP streaming
  • Secure vs. open storage

About Colin Keefe

Colin Keefe holds an MFA in Sculpture from Cranbrook Academy of Art. He built his first FileMaker database in the front office of a woodshop in 1994. Subsequently he spent time in a Silicon Alley startup before beginning full-time FileMaker development in 1998, first at EnglishComp and then as an independent developer.

Colin has been with IT Solutions Consulting since 2005, where he is currently a Project Manager in the Application Development department. He holds FileMaker certifications for Versions 7 through 11, is a FileMaker Authorized Trainer, has presented at DevCon, conducted roundtable sessions at Pause On Error, and written for FileMaker Advisor Magazine. He’s also been the coordinator for the Philadelphia FileMaker User Group since 2008.

Filed Under: FileMaker Server Tagged With: 12 Days of FileMaker 12, Container fields, FileMaker 12, Performance, Video

FileMaker 12 Adds Native SQL Queries

April 27, 2012 by Anders Monsen 31 Comments

Probably one of the most talked about features in FileMaker Pro 12, at least from a developer perspective, is the new ExecuteSQL() calculation function. To users interacting with FileMaker records, this is largely a hidden feature. Yet this powerful new option impacts developers in several ways.

Previously, developers used plugins to expose the SQL underpinnings of FileMaker. Now, a portion of SQL has become native to FileMaker. This portion is limited to the “Select” statement. Those same plugins remain viable options for developers who seek to insert, update, or delete records.

The programming language SQL (Structured Query Language) centers on the query, or Select statement, to pull data from one or more tables. For developers new to SQL there are several books and internet resources available, yet the most concise and immediately useful reference is probably FileMaker’s own ODBC and JDBC guide.

The SELECT Statement

Starting with the most elemental of queries, we break the Select statement into multiple lines to visualize its different components. We can then add more pieces as we master each section. We start by selecting a field from a table.

SELECT some_field_name
FROM some_table_name

It’s important to note that this statement selects the entire column, not a single field value. So if we said “select Department from Contacts”, we would get a list of the departments from every record in the Contacts table. You can also include mathematical operations or modify strings in the query. To count values you can use count(*) for all records, or count(field_name) for only the records that contain a value in field_name.

The WHERE Clause

The Select statement works well in its basic fashion, but what we’re really after is the Where clause, combined with the ability to interact with multiple tables in one query. Rather than building complex relationships in the graph to accomplish the same result, we’re now able to solve this through SQL.

When FileMaker Pro 11 introduced portal filtering we were able to simplify our graph and remove several Table Occurrences, and now with ExecuteSQL the graph can focus more on pure relationships, as this function is context independent. We can query table occurrences from anywhere in FileMaker, without having to navigate to specific layouts or rely on additional table occurrence groups.

The next step along the SQL path is adding the Where clause and additional tables.

SELECT some_field_name
FROM some_table_name
WHERE a_condition_applies

The condition serves to limit the Select statement; without it you would select all the records in the table. A simple Select statement with a Where clause would look like “select Department from Contacts where City = 'Dallas'”.
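This query runs unchanged against any SQL engine. Here is a quick Python/sqlite3 sketch with hypothetical sample data, showing both the Where clause and the count(*) form mentioned earlier:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (department TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO contacts VALUES (?, ?)",
    [("Sales", "Dallas"), ("Support", "Austin"),
     ("Sales", "Dallas"), ("Legal", "Dallas")],
)

# "select Department from Contacts where City = 'Dallas'"
depts = [r[0] for r in conn.execute(
    "SELECT department FROM contacts WHERE city = 'Dallas'")]

# count(*) over the same restriction
n = conn.execute(
    "SELECT count(*) FROM contacts WHERE city = 'Dallas'").fetchone()[0]
```

Note that the result contains one value per matching record, duplicates included, exactly as the Contacts example describes.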

ExecuteSQL Syntax

The way the ExecuteSQL function behaves plays a key role in writing SQL statements. According to the calculation dialog, the syntax is:

ExecuteSQL( sqlQuery ; fieldSeparator ; rowSeparator { ; arguments… } )

The sqlQuery is the query string – the Select statement. Since this is text, we write it inside double quotes. The fieldSeparator and rowSeparator default to comma and return, so unless we need the result in a different format we can simply pass empty strings in double quotes. The arguments parameter is optional and is discussed further below. The following query counts all the records in the state of Texas, using single quotes for the literal value TX:

ExecuteSQL( "select count(*) from Contacts where State = 'TX'"; ""; "" )

Pages 55-57 of the ODBC guide list several reserved words. A few of these words might appear in your solutions as field names, such as “order” or “size”, or “First” and “Last” instead of FirstName or LastName (or NameFirst and NameLast). These words will need to be escaped in queries. In addition, field names with spaces also need to be escaped.

ExecuteSQL( "select \"Order\" from Orders where \"Product Name\" = 'FileMaker'" ;"";"")

Unlike MySQL, for example, the ExecuteSQL function will show just ? in the result if there is an error in your query, with no hint as to what caused the error.

Using the Arguments Parameter

The arguments option allows the use of multiple arguments in the sqlQuery string. These arguments use the question mark as a placeholder, and are then listed in order in the argument section.

ExecuteSQL( "
select count(*)
from Books
where zkf_authorID = ?
and datePublished < ? " ;
"" ; "" ; Author::zkp_AuthorID ; Get ( CurrentDate ) - 365 )

Rather than hard-coding literal conditions (e.g. State = 'TX'), here we pass in values from a specific record, finding books by the author of the current book record against a one-year date cutoff.
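The ? placeholder style maps directly onto parameter binding in other SQL APIs. A Python/sqlite3 sketch of the query above, with hypothetical sample data and keeping the article’s < comparison:

```python
import sqlite3, datetime

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (zkf_authorID INTEGER, datePublished TEXT)")
conn.executemany(
    "INSERT INTO books VALUES (?, ?)",
    [(42, "2012-09-01"), (42, "2011-01-15"), (99, "2013-01-01")],
)

author_id = 42
today = datetime.date(2013, 6, 1)  # stand-in for Get ( CurrentDate )
cutoff = (today - datetime.timedelta(days=365)).isoformat()

# The two ? placeholders are filled, in order, by the trailing arguments,
# just as in ExecuteSQL's arguments parameter.
count = conn.execute(
    "SELECT count(*) FROM books WHERE zkf_authorID = ? AND datePublished < ?",
    (author_id, cutoff),
).fetchone()[0]
```

Binding values this way also sidesteps quoting problems: the driver handles escaping, so no single quotes are needed around the values.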

The FROM Clause

Querying multiple tables happens in the From clause, and the fields in the Select clause must then point back to those tables. The fields can be prefixed with the table name or an alias; an alias is useful because it reduces clutter in the query. With a table called Customers we can write “from Customers C” and prefix the field as C.NameFirst in the Select statement. To query across multiple tables we “join” them with an equal sign (=), which includes only matching rows in the results.

SELECT C.NameFirst, C.NameLast, O.DateOrdered
FROM Customers C, Orders O
WHERE C.custID = O.custIDf
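Here is a runnable sketch of the aliased join, using sqlite3 and hypothetical sample rows; only customers with matching orders appear in the result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (custID INTEGER, nameFirst TEXT, nameLast TEXT)")
conn.execute("CREATE TABLE orders (custIDf INTEGER, dateOrdered TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'Byron')")
conn.execute("INSERT INTO customers VALUES (2, 'Alan', 'Turing')")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "2012-04-01"), (1, "2012-04-15")])

# Aliased join: the Where clause pairs each customer with their orders.
rows = conn.execute(
    """SELECT C.nameFirst, C.nameLast, O.dateOrdered
       FROM customers C, orders O
       WHERE C.custID = O.custIDf"""
).fetchall()
```

The customer with no orders (Alan) is filtered out, since only matching rows survive the equality condition.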

Wrapping Up

The new ExecuteSQL function opens up a multitude of possibilities. In scripting you can build queries across tables, populating variables for use in virtual list techniques. In charting you can gather data for dashboards in the same way, without having to jump to various different layouts and loop through records to summarize them. For tooltips you can provide information specific to the current record that looks across related tables. All this can be accomplished without adding a single new table occurrence.

Filed Under: Calculations Tagged With: ExecuteSQL, FileMaker 12, Performance

12 Days of FileMaker 12: Container Field Enhancements

April 23, 2012 by Kirk Bowman Leave a Comment

FileMaker Pro 12 brings a new storage method for container fields. This session from the 12 Days of FileMaker 12 introduces developers to the new storage method from a practical usage point of view as well as from a technical point of view.

Container Enhancements in FileMaker Pro

Topics in this video include:

  • Manage Containers option
  • Thumbnail generation
  • Secure container storage
  • Remote data transfer
  • Inspector options
  • Insert File script step
  • Converting older databases

About Tim Neudecker

Tim Neudecker is a leading software engineer who has over 20 years’ experience as a professional FileMaker developer. He spent over ten years as an in-house developer for companies such as Macy’s/Bloomingdales and AGA. In 2003, he joined KyoLogic as co-founder and CTO.

Tim has presented four times to the FileMaker Developer Conference on topics including security, system performance and developer tools. He has published articles in FileMaker Advisor on system integration and performance tuning. In 2003, he was honored with a FileMaker Excellence Award for his contributions to the developer community.

Filed Under: Rapid Development Tagged With: 12 Days of FileMaker 12, Container fields, FileMaker 12, Performance, Video

Brainstorming Your Way To The Solution – Part 2

October 25, 2011 by Brad Stanford Leave a Comment

In Part 1, I explained how I started exploring different options to meet a customer request for a specific report (not a standard subsummary report) with acceptable performance (not a resource hog or more than a minute to display).

Combine the Knowns to Solve the Unknown

By thinking through all of the tools and methods I had tried or read about, I was able to assemble a list of the steps I needed to take, and what tool or technique to use on each step:

  1. Gather all the parent and child IDs – GetNthRecordSet( ) function
  2. Convert the sort-able data that references a giant value list to a number that can be sorted easily – Position( ) function
  3. Set the IDs and sort numbers directly in the report records – via portal, to avoid the commit problem (transactional commit)
  4. Report layout – Display related data in dummy records to cut down on CPU usage (similar to virtual list technique)
  5. Sorting – Sort on numbers that have been set into fields by a script, rather than sorting a related field based on a related value list

Here’s how that gathering of tools played out in real life:

  1. Go to the list view of the parent records.
  2. Enter a loop with a variable ($Counter) set as a counter. $Counter will be used later to sort the records into the order they were encountered.
  3. Loop through the parent records using the GetNthRecordSet( ) function, using the counter variable to determine which record to look at.
  4. On each parent record, append a global variable ($$Keys) with the parent record’s ID, two pipe characters (placeholders), and $Counter.
  5. Begin a second loop with its own counter. Using the parent/child key, loop through all the related child records and append $$Keys with: the child record’s ID, the sort order required by a large, user-generated value list, and the parent record’s $Counter.

The calculation for the sort order based on the giant value list is simply:

Position (
	ValueListItems ( "GiantValueList" ; "File.fp7" ) & "¶" ;
	ItemFromChildRecord & "¶";
	[starting at] 1;
	[occurrence] 1
)

This converted the item name to a number, based on where it was found in the value list. So instead of sorting on a related field based on a related value list, I would simply sort on a number field set by a script.
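The idea – turning an item name into a sortable number via its position in a list – can be sketched in Python. (Position ( ) actually returns a character offset; any monotonic number works for sorting, so this sketch uses the 1-based item index instead, with a hypothetical value list.)

```python
# Hypothetical stand-in for the giant user-generated value list.
giant_value_list = ["Jersey", "Hoodie", "Cap", "Socks"]

def sort_number(item):
    """1-based position of item in the value list, 0 when absent
    (mirroring Position(), which returns 0 when the text is not found)."""
    try:
        return giant_value_list.index(item) + 1
    except ValueError:
        return 0

numbers = [sort_number(x) for x in ("Cap", "Jersey", "Scarf")]
```

Sorting on these precomputed numbers is cheap, whereas sorting on a related field driven by a related value list forces FileMaker to resolve the relationship for every row.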

By this point in the script, I will have collected all the values I need. The data looks like this:

1096718||1
471067|362|1
471068|131|1
471069|438|1
471070|48|1
471071|779|1
471072|337|1
8721684|738|1
1096732||2
419861|131|2
2458113|48|2
2458114|3998|2
2459828|779|2
8721807|738|2
1096733||3
575704|362|3
575705|438|3
575706|48|3

Decoded, it means:

Parent Record
RecordSerial | | SortOrder
1096718 | | 1
Child Record
RecordSerial | PositionInValueList | SortOrder
471067 | 362 | 1

All that’s left to do is to loop through the report records and set the fields. Once the key fields are set for either parent or child records, a relationship is established, and I can display whatever I want from that record. In this way, I’m not moving all the data, just the key fields allowing me to place related fields on the report layout.

I’m also setting the sort order in a number field, which will let me group parents and children together. The middle value of the child record data represents the sub-sort order below the parent.
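Parsing the $$Keys data is straightforward once the parent/child convention is known: a parent row has an empty middle field, a child row carries its value-list position as a sub-sort key. A Python sketch using a slice of the sample data above:

```python
sample = """1096718||1
471067|362|1
471068|131|1
1096732||2
419861|131|2"""

parents, children = [], []
for line in sample.splitlines():
    serial, position, sort_order = line.split("|")
    if position == "":
        # Parent record: RecordSerial || SortOrder
        parents.append((int(serial), int(sort_order)))
    else:
        # Child record: RecordSerial | PositionInValueList | SortOrder
        children.append((int(serial), int(position), int(sort_order)))
```

Each parsed tuple maps onto the fields the report script sets: the key field that establishes the relationship, plus the sort numbers.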

Setting Fields Quickly

When I was setting fields before, I had problems with committing records taking up time. Once again, Todd Geist (and this time, also Jason Young) to the rescue. Looping through a portal does not commit the records as you move from record to record. So I created a third relationship, a one-to-many self-join, using a global key field:

Reports::zzgk_NumberOfRows = Reports::zzk_Row

where Reports::zzk_Row is a pre-populated field.

For the relationship, zzk_Row is a text field set to the record number of the record in question at the time of creation; for sorting, there is a number version of the field. (And in this scenario, I went ahead and created 30,000 records in advance – probably enough for almost all scenarios for this particular report.)

So now, to continue with the script concept:

  1. Go to a layout based on the Report table that has a transparent scrolling portal based on the Reports::zzgk_NumberOfRows = Reports::zzk_Row relationship. There are no fields in the portal, for speed’s sake.
  2. Set field Reports::zzgk_NumberOfRows to (PatternCount ( $$Keys ; “¶” ) + 1 ). This tells the portal how many rows of report records to show.
  3. Loop through the portal, setting either the parent key and clearing the child key (from the last time the report was run) or vice versa: set the child key, and clear the parent.
  4. While in that portal row, also set any and all sort values.
  5. Exit the portal, commit the record, and all the changes are committed at once.
  6. Navigate to the actual Report layout, and sort the records by the parent’s sort number – which groups each parent with its children in the order the user originally sorted the parents – and then sub-sort the child records by their value-list sort number. Because these numbers are set by the script rather than calculated on the spot, they sort very quickly.
  7. Enter preview and present the print dialog.
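The grouping in step 6 boils down to a two-key sort: first by the parent’s sort number, then by the child’s value-list position. Here is an illustrative Python sketch of just that sort logic; the record dictionaries and field names are hypothetical stand-ins for the FileMaker fields.

```python
# Sketch of the step-6 sort: group by the parent's sort number, then
# sub-sort children by their value-list position. Parents carry a
# sub_sort of 0 so each one floats above its own children.

records = [
    {"serial": "471068",  "parent_sort": 1, "sub_sort": 131},
    {"serial": "1096718", "parent_sort": 1, "sub_sort": 0},   # parent row
    {"serial": "419861",  "parent_sort": 2, "sub_sort": 131},
    {"serial": "1096732", "parent_sort": 2, "sub_sort": 0},   # parent row
    {"serial": "471067",  "parent_sort": 1, "sub_sort": 362},
]

# Both keys are plain stored numbers set by the script, which is why the
# equivalent FileMaker sort runs fast: nothing is calculated at sort time.
report_order = sorted(records, key=lambda r: (r["parent_sort"], r["sub_sort"]))
print([r["serial"] for r in report_order])
# → ['1096718', '471068', '471067', '1096732', '419861']
```

Each parent lands first in its group because its sub-sort is effectively zero, and its children follow in value-list order.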

The fields on the Report layout are actually related fields from the parent and child. Each report record has both sets of fields in the body. If it’s a parent record, the child fields are empty, and set to slide and shrink the enclosing part (the body). If it’s a child record, the parent fields collapse out of the way, and the body once again shrinks. In this way, I just tell a record what it is and use sliding to make it look right.

To make it clear which is which, I put conditional formatting on the parent record:

[ Not IsEmpty ( Self ) ; Turn the parent fields gray ]
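The sliding-and-formatting logic above amounts to a per-row decision: whichever key field is populated determines which field set shows and which one collapses. A small Python sketch of that decision, with hypothetical field names (the real behavior lives in FileMaker’s sliding and conditional-formatting settings):

```python
# Sketch: each report record carries both parent and child field sets;
# the empty set "slides" out of the way at print time.

def render_row(row):
    """Return the visible line for one report record."""
    if row["parent_key"]:
        # Parent row: child fields are empty and slide away; conditional
        # formatting (Not IsEmpty(Self)) turns the parent fields gray.
        return f"PARENT {row['parent_key']}"
    # Child row: parent fields are empty and slide away instead.
    return f"    child {row['child_key']}"

print(render_row({"parent_key": "1096718", "child_key": ""}))
print(render_row({"parent_key": "", "child_key": "471067"}))
```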

This twelve-step process did the trick, and it was much faster than anything I had built before. Better still, it worked! … Well, almost.

I started having trouble with “ghosting” — that is, a relationship continuing to show the last value it displayed even after the relationship key has changed. As best I could tell, this was simply some freaky by-product of walking through the Report records through a portal. Like I said before, when you’re innovating, you never know what FileMaker is going to bless or curse.

My solution here was to mirror other processes in this solution: open a small utility window, perform the find and set the fields there, then close the temporary window and navigate to the report from the original window. This worked like a charm.

Time for Some Speed Tests

Here are some results based on small-found-set tests that I ran:

  • 272 parent records / ~4,500 child records – elapsed time: 2:45
  • 213 parent records / ~2,770 child records – elapsed time: 3:35
  • 126 parent records / ~830 child records – elapsed time: 0:42
  • 208 parent records / ~1,540 child records – elapsed time: 1:10

The apparent inconsistency comes from variations in the amount and type of data being referenced.

But the times are well under the five to ten minutes my very first scripts were taking. And CPU load dropped from a solid 100+ percent for five minutes (sometimes peaking at 190%) to around 15 percent, with occasional peaks of 30 percent, over runs that now finish in under three minutes most of the time.

That, for me, was the process of brainstorming my way through this problem. I expect there to be some forehead-slapping suggestions like, “Why didn’t you just use the MagicDoWhatINeed( ) custom function?” And my answer will be the same to all suggestions: “Next time, I probably will!”

The point is to try to demystify the innovation process in a continuing effort to become better developers. Also, to encourage everyone to continue chasing down those difficult problems and making them submit.

The bad news: it’s just a lot of hard work sometimes. The good news: if it’s hard, then many will avoid it. That means the playing field is open to whoever wants to be the next famous FileMaker innovator, and acquire the related spoils.

Filed Under: Scripting Tagged With: Guest post, Performance, Reporting


