MightyData

FileMaker and WordPress Consultants

Brainstorming Your Way To The Solution – Part 2

October 25, 2011 by Brad Stanford

In Part 1, I explained how I started exploring different options to meet a customer request for a specific report (not a standard subsummary report) with acceptable performance (not a resource hog or more than a minute to display).

Combine the Knowns to Solve the Unknown

By thinking through all of the tools and methods I had tried or read about, I was able to assemble a list of the steps I needed to take and which tool or technique to use for each step:

  1. Gather all the parent and child IDs – GetNthRecordSet( ) function
  2. Convert the sort-able data that references a giant value list to a number that can be sorted easily – Position( ) function
  3. Set the IDs and sort numbers directly in the report records – via portal, to avoid the commit problem (transactional commit)
  4. Report layout – Display related data in dummy records to cut down on CPU usage (similar to virtual list technique)
  5. Sorting – Sort on numbers that have been set into fields by a script, rather than sorting a related field based on a related value list

Here’s how that gathering of tools played out in real life:

  1. Go to the list view of the parent records.
  2. Enter a loop with a variable ($Counter) set as a counter. $Counter will be used later to sort the records into the order they were encountered.
  3. Loop through the parent records using the GetNthRecordSet( ) function, using the counter variable to determine which record to look at.
  4. On each parent record, append a global variable ($$Keys) with the parent record’s ID, two pipe characters (placeholders), and $Counter.
  5. Begin a second loop with its own counter. Using the parent/child key, loop through all the related child records and append $$Keys with: the child record’s ID, the sort order required by a large, user-generated value list, and the parent record’s $Counter.

The calculation for the sort order based on the giant value list is simply:

Position (
	ValueListItems ( "GiantValueList" ; "File.fp7" ) & "¶" ;
	ItemFromChildRecord & "¶" ;
	1 ;	// starting at
	1	// occurrence
)

This converted the item name to a number, based on where it was found in the value list. So instead of sorting on a related field based on a related value list, I would simply sort on a number field set by a script.
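Outside FileMaker, the same idea can be sketched in a few lines of Python. The value list and item names below are made up for illustration; only the technique matches the calculation above.

```python
# Convert an item name to a sortable number by locating it in the value list,
# mirroring Position ( ValueListItems & "¶" ; item & "¶" ; 1 ; 1 ).
# The list and items below are illustrative, not from the original solution.
value_list = "Widgets\nGadgets\nGizmos\nSprockets"  # FileMaker separates items with ¶

def sort_number(item: str) -> int:
    # Character offset of "item¶" within the list (1-based), 0 if not found.
    return (value_list + "\n").find(item + "\n") + 1

items = ["Gizmos", "Sprockets", "Widgets"]
items.sort(key=sort_number)  # value-list order, not alphabetical
```

Sorting on a number computed once by a script is what replaces the slow sort on a related field through a related value list.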

By this point in the script, I will have collected all the values I need. The data looks like this:

1096718||1
471067|362|1
471068|131|1
471069|438|1
471070|48|1
471071|779|1
471072|337|1
8721684|738|1
1096732||2
419861|131|2
2458113|48|2
2458114|3998|2
2459828|779|2
8721807|738|2
1096733||3
575704|362|3
575705|438|3
575706|48|3

Decoded, it means:

Parent Record
RecordSerial | | SortOrder
1096718 | | 1
Child Record
RecordSerial | PositionInValueList | SortOrder
471067 | 362 | 1

All that’s left to do is to loop through the report records and set the fields. Once the key fields are set for either parent or child records, a relationship is established, and I can display whatever I want from that record. In this way, I’m not moving all the data, just the key fields allowing me to place related fields on the report layout.

I’m also setting the sort order in a number field, which will let me group parents and children together. The middle value of the child record data represents the sub-sort order below the parent.
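As a sketch of how those pipe-delimited values drive the final sort, here is the same parse-and-sort in Python, using a few rows from the sample above. Treating the empty middle value as "this is a parent" makes each parent sort ahead of its own children.

```python
# Parse a few $$Keys rows from above: serial | value-list position | parent sort.
# Parents have an empty middle value, so they sort ahead of their children.
keys = """1096718||1
471067|362|1
471068|131|1
1096732||2
419861|131|2"""

rows = []
for line in keys.splitlines():
    serial, pos, parent_sort = line.split("|")
    is_child = pos != ""
    rows.append((int(parent_sort), is_child, int(pos or 0), serial))

rows.sort()  # each parent first, then its children in value-list order
ordered_serials = [r[3] for r in rows]
```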

Setting Fields Quickly

When I was setting fields before, I had problems with committing records taking up time. Once again, Todd Geist (and this time, also Jason Young) to the rescue. Looping through a portal does not commit the records as you move from record to record. So I created a third relationship, a one-to-many self-join, using a global key field:

Reports::zzgk_NumberOfRows = Reports::zzk_Row

where Reports::zzk_Row is a pre-populated field.

For the relationship, zzk_Row is a text field that was set to the record number of the record in question at the time of creation; for sorting, there’s a number version of the field. (And in this scenario, I went ahead and created 30,000 records in advance. That’s probably enough for almost all scenarios for this particular report.)

So now, to continue with the script concept:

  1. Go to a layout based on the Report table that has a transparent scrolling portal based on the Reports::zzgk_NumberOfRows = Reports::zzk_Row relationship. There are no fields in the portal, for speed’s sake.
  2. Set field Reports::zzgk_NumberOfRows to ( PatternCount ( $$Keys ; "¶" ) + 1 ). This tells the portal how many rows of report records to show.
  3. Loop through the portal, setting either the parent key and clearing the child key (from the last time the report was run) or vice versa: set the child key, and clear the parent.
  4. While in that portal row, also set any and all sort values.
  5. Exit the portal, commit the record, and all the changes are committed at once.
  6. Navigate to the actual Report layout, and sort the records: first by the parent’s sort number – which places each parent and its child records together in the order the user originally sorted the parents – and then by the value list sort number for the child records. Because these numbers are set by the script rather than calculated on the spot, they sort very quickly.
  7. Enter preview and present the print dialog.

The fields on the Report layout are actually related fields from the parent and child. Each report record has both sets of fields in the body. If it’s a parent record, the child fields are empty, and set to slide and shrink the enclosing part (the body). If it’s a child record, the parent fields collapse out of the way, and the body once again shrinks. In this way, I just tell a record what it is and use sliding to make it look right.

To make it clear which is which, I put conditional formatting on the parent record:

[ Not IsEmpty ( Self ) ; Turn the parent fields gray ]

This twelve-step process did the trick. And it was much faster than anything I had built before it. Better still, it worked!…Well, almost.

I started having trouble with “ghosting” — that is, when a relationship continues to show the last value it showed, even though the relationship key has changed. As best as I could tell, this was simply some freaky by-product of walking through the Report records through a portal. Like I said before, when you’re innovating, you never know what FileMaker is going to bless or curse.

My solution here was to mirror other processes in this solution and open a small window in which to do the search and set the fields. Then, when the report was set, I would close the temporary window and navigate to the report once I was back in the original window. This worked like a charm.

Time for Some Speed Tests

Here are some results based on small-found-set tests that I ran:

  • 272 parent records / ~4500 child records – Elapsed time: 2:45
  • 213 parent records / ~2770 child records – Elapsed time: 3:35
  • 126 parent records / ~830 child records – Elapsed time: 0:42
  • 208 parent records / ~1540 child records – Elapsed time: 1:10

The apparent lack of consistency is from variations in the amount or type of data being referenced.

But the times are well under the five to ten minutes my very first scripts were showing. And CPU usage dropped from a solid 100+ percent for five minutes (sometimes peaking at 190%) to 15%, peaking at 30% here and there, with most runs under three minutes.

That, for me, was the process of brainstorming my way through this problem. I expect there to be some forehead-slapping suggestions like, “Why didn’t you just use the MagicDoWhatINeed( ) custom function?” And my answer will be the same to all suggestions: “Next time, I probably will!”

The point is to try to demystify the innovation process in a continuing effort to become better developers. Also, to encourage everyone to continue chasing down those difficult problems and making them submit.

The bad news: it’s just a lot of hard work sometimes. The good news: if it’s hard, then many will avoid it. That means the playing field is open to whoever wants to be the next famous FileMaker innovator, and acquire the related spoils.

Filed Under: Scripting Tagged With: Guest post, Performance, Reporting

Taming the Virtual List – Part 2

October 17, 2011 by Lisette Wilson

Part 1 provided a broad outline of the Virtual List Technique. Part 2 will step through a specific example of how this technique solved a MightyData customer’s request.

Requirements

This customer needed a sales order and invoice tracking system, and provided a sample of the current sales order in Excel. I mimicked this layout in FileMaker, with a header showing purchaser details, a body for line items, a subsummary by purchaser for order totals, and a trailing grand summary for a page of legal language. The customer had to submit these sales orders to another department for processing, and that department required a specific format. The section with totals needed to appear on every page except the final page with legal language; however, the totals themselves should only be shown on the last page of line items, with the boxes blank or reading “Continued…” on preceding pages. Each page with line items should show 4 line items, even if some were blank.

In a basic FileMaker format, if there were more than 4 line items on the sales order, all line items would be included on the first page, and the subsummary section with the order totals would be pushed to the next page. The requirement for blank lines made using the line items themselves problematic: even an order with 3 line items, which would otherwise fit the format, would not have that blank fourth line.

Table Structure

This database started with a foundation of the SeedCode Complete template, so the building blocks for the Virtual List were already in place, in the CalendarRows table of the Interface file.

I added a global text field to the CalendarRows table (gInvoiceID) and created a relationship from CalendarRows to a new RowsInvoices table occurrence based on the InvoiceLineItems table (a sales order becomes an invoice by a change in status).

Layout Modifications

I duplicated the current layout for the Sales Order, changed the layout context to the CalendarRows table, and repointed the fields in the header and subsummary to the related Invoice fields. The line items will be fields in the CalendarRows table that pull values from the Virtual List; this is covered in more detail later.

The script begins on the Sales Order layout, where the line items are shown in a portal.

Scripting

In this case, values are required for each of the line item fields, so we can use the List function from the context of the Sales Order layout to build global variables for the Virtual List. If it were possible for a record to have no entry for a field, the List function would not be suitable without a utility calculation field to ensure a value for each record. In that case, I would have looped through the portal rows to build the global variables, representing a blank field with a blank line in the global variable’s value list.

Set the global field (CalendarRows::gInvoiceID) to the primary key for the Sales Order. Then set the following global variables:

$$sc_Qty = List ( InvoiceLines::Qty )
$$sc_Desc = List ( InvoiceLines::zDisp_LineDescription_ncr )
$$sc_PriceEa = List ( InvoiceLines::PriceEa )
$$sc_LineTotal = List ( InvoiceLines::PriceLineTotalCalc )
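Conceptually, each List() call builds one “column” of the report. A rough Python equivalent, with made-up line items, might look like this:

```python
# Build parallel return-separated "columns", one value per line item.
# The sample data and names are illustrative.
line_items = [
    {"qty": 2, "desc": "Widget", "price": 5.00},
    {"qty": 1, "desc": "Gadget", "price": 12.50},
]
sc_qty      = "\n".join(str(i["qty"]) for i in line_items)        # $$sc_Qty
sc_desc     = "\n".join(i["desc"] for i in line_items)            # $$sc_Desc
sc_price_ea = "\n".join(f'{i["price"]:.2f}' for i in line_items)  # $$sc_PriceEa
# Row n of the report later reads value n from each column.
```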

You may be wondering about zDisp_LineDescription_ncr. The line description is built of several fields with some text identifiers, and includes carriage returns. Since the Virtual List relies on getting specific values from the value list, we can’t have carriage returns within the data. This calculation field substitutes a text constant for the carriage returns. That process will be reversed in the calculation to extract the values.
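The substitution can be sketched like this. The "|CR|" token is my assumption for illustration; the original field uses some text constant of its own.

```python
# Swap carriage returns for a token so each description is a single value,
# then reverse the swap when extracting for display.
CR_TOKEN = "|CR|"  # assumed placeholder; the real field uses its own constant

def encode(description: str) -> str:
    return description.replace("\n", CR_TOKEN)   # like zDisp_LineDescription_ncr

def decode(value: str) -> str:
    return value.replace(CR_TOKEN, "\n")         # like the extraction calculation

desc = "Blue widget\nModel 7\nGift wrapped"
round_trip = decode(encode(desc))
```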

Four more fields were added to the CalendarRows table, to extract the values from the global variables above. Three of the four simply pull the value from the global variable based on the number in the RowNumber field. SalesOrderDescription_c handles that carriage return we needed in the description by reversing the earlier substitution. These four fields are the body of the layout. The fields are four lines tall, with a border. The lines shown within each field are overlaid line elements.

Count the number of values in one global variable to determine the number of line items. Divide this by 4 and round up to determine the total number of pages, and set the initial page value to 1.

Set a variable to the filename to be used for this document, and set the path to:

$path = "file:" & Get ( DesktopPath ) & $filename

The script then loops, finding rows in groups of 4, until all pages have been printed. It uses Save Records as PDF for the first page, then the Append option for subsequent pages. In PDF Options, specify page 1 of 1 until the final page, then include all pages. By always finding 4 records, we will have blank rows when there are fewer than 4 rows with data.
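The paging logic can be sketched as follows (Python, with illustrative numbers standing in for the found sets):

```python
import math

# 10 illustrative line items; each printed page shows exactly 4 rows.
line_items = list(range(1, 11))
ROWS_PER_PAGE = 4
total_pages = math.ceil(len(line_items) / ROWS_PER_PAGE)

pages = []
for page in range(total_pages):
    chunk = line_items[page * ROWS_PER_PAGE:(page + 1) * ROWS_PER_PAGE]
    chunk += [None] * (ROWS_PER_PAGE - len(chunk))  # blank rows pad the last page
    pages.append(chunk)
# Page 1 maps to Save Records as PDF; pages 2..n use the Append option.
```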

[Image: virtual-list-print-loop]

The $$lastpage variable is set to 1 for the final page. This allows the script to include all pages, as noted above, and also drives the conditional formatting.

Remember the requirement to only show the totals on the last page of line items? “Continued…” with a transparent background overlays each of the total fields. When $$lastpage does not equal 1, the background is changed to white so that the totals cannot be seen. Some of the totals should simply be blank, and for those, the text is also changed to white. When $$lastpage equals 1, the only change is that the font size is made very large, so large it cannot be shown in the tiny space allotted to the text, allowing the number field beneath to be seen.

Summary

The Virtual List Technique is versatile. Developers are using it to create amazing solutions. Think of the global variables as columns, with each individual value in that list as the row data. Looping scripts can gather data from portal rows, or a found set of records, or several found sets of records. The data can be simple or complex. Use the GetSummary function to gather summary data, such as totals, averages or percentages, for a cross-tab report. Understanding the basic concepts of the Virtual List Technique is the key; with that foundation and a little imagination, you can solve any number of reporting and display challenges.

Filed Under: Scripting Tagged With: Guest post, Reporting, Virtual list

Brainstorming Your Way To The Solution – Part 1

October 14, 2011 by Brad Stanford

The assignment was simple: A report is not sorting correctly. Fix it. Easy, right?

Little did I know that by taking this assignment I was about to pull out the pebble that would open up the dam. A deluge of trial and error later, the report was fixed, and I was a more knowledgeable developer.

Real-World Example

This is a story that happens every day. Somewhere right now there is a developer (maybe you?) drowning in a scenario that is threatening to rob time, money, and reputation from him or her. How do you get through it? What does it look like to work through a problem until it’s solved?

As an example, this post will attempt to describe what it looks like when I brainstorm a problem and try to innovate or at least stretch my own boundaries as a developer. I hope this story inspires you to not give up, but to keep researching, asking, and discovering for yourself. There is much innovation to be had, and you could be the one who discovers the “Next Big Thing” that we talk about at DevCon! Those opportunities come through perseverance through tough problems.

I think oftentimes we (me included) view the more “famous” developers as if they’re immune to working hard or being stumped. The reason they make things look easy is because they don’t give up when things get difficult. In fact, I propose that if you want to innovate, you need to embrace the more difficult problems – invite them, in fact. Taunt them with trash talk before you dive in. The hard problems are fertile ground for the very thing we desire most: innovation. If you decide in advance that you’re going to solve this problem and solve it well, then all that’s left is to do it.

This story is also about being honest about where innovation comes from. Throughout this adventure, I’m going to document where the work of others directly influenced the outcome of this project so you can see that I am only as brilliant as the developer next to me. I’m constantly cheating off someone else’s paper! Except, in the FileMaker community, it’s not cheating. It’s expected. This particular community seems to understand that all human innovation is built on prior art. I for one am not ashamed to say it out loud, so I will always try to happily give credit where it’s due.

The Report

Here’s the background to the report I needed to build:

  • The information comes from up to six different relationships, each with at least three predicates, the main relationship having four.
  • Four of these relationships existed as portals in a tab object. One of these relationships was on a developer layout.
  • The other supplied header information for the parent record on both of these layouts.
  • The sort order for the child records comes from two places: one of the fields in the child table, and a related value list with over 200 items.
  • The user must be able to sort the parent records in any order, and have the resulting child records underneath them.
  • The parent table has over 750,000 parent records.
  • The child table has well over 5,000,000 records. (Yes, that’s millions.)
  • This is not my solution. It’s been around for a while and I have little say in some things, a lot of say in others.
  • I try to avoid things like large indexes if possible.

The first attempt (made by another developer before he wisely handed it off) was with sliding portals in the parent table. The process was roughly:

  1. Create a layout using the parent table as the source of the layout. (This would be in opposition to the standard report building technique of having the child records as the source of the layout.)
  2. Add the children via four portals in the body element, underneath the fields from the parent record.
  3. Make each of the four portals as tall as possible.
  4. Set the portals and fields to slide up and reduce the size of the enclosing part. (Empty portal rows won’t be shown.)
  5. Sort the portals as necessary.

This ultimately didn’t work, because something in portal 4 might need to sort above something in portal 1. Even if this was not the case now, the user could change the value list, and it might be the case in the future.

My Turn

This is where I picked up the project, trashed the portal-based report and started from scratch.

During discovery, I looked for a key/ID field that linked the parent record to all of the child records in the different portals. There was a field that was labeled appropriately enough, but when I threw it out on the layout and compared it to the key fields in the different portals, it only matched some of the line items. It also had a pipe character at the end of it. It looked like this particular key that was supposed to link parent and child records was parsed by some other process, and I didn’t want to break it. So I had no parent/child linking field to work with. It is what it is, and there was nothing I could do about it.

Defining a new key field over 750,000 records didn’t sound appealing, much less over the 5,000,000 child records. So I started looking for a way to extract the data while keeping the records together and putting the extracted information into a merge table of some kind. After all, the ultimate goal was simply display. All that mattered was that the data was in the right order.

Tap Into DevCon Knowledge

Coming fresh from FileMaker DevCon 2011, I decided to try a virtual list for this report. The routine was something like this:

  1. Loop through each portal in each record, and place the field values for the report into variables.
  2. The report itself will be built of empty records. Each record will contain unstored calculations that parse the data out of the different variables.
  3. Send the variable values from the interface file to the data file where the report resided. This is done by calling a script in the data file with a parameter of the variable that needs to be sent. The script in the data file sets a global variable to the value of Get(ScriptParameter), thus transferring the variable from one file to another. (I learned this trick from Ross Mackintosh here at MightyData and love it!)
  4. Lastly, find the number of report records equal to the number of values in the variable.
  5. Sort.

This technique worked over a small found set, but once the combination of parent and child records exceeded 1,000, I started to see the CPU usage go way up. This would never work with 100 users in the system, and it would be devastating if two users ran the report at the same time!

Using the script debugger, I started isolating sections of the report script and watching the CPU meters on the server. I ended up finding two problems: 1) simply navigating from record to record and stepping through the portals was costing me dearly, and 2) the report was made up of six fields that were all unstored calculations trying to find their values within a thousand lines (or more) in each of six variables. That’s a lot of calculating, and a lot of CPU usage.

So, the virtual list idea as a whole was down, but not quite out. I had the follow-up idea that I should just collect the keys (IDs) for the records in variables, instead of all the field values themselves. This would be far less hassle for both data collection and calculating.

First, I tested to make sure that virtual keys were acceptable to FileMaker and reliable. You never know what kind of mess you might step into when you’re working unconventionally. Sure, an unstored calc field might work on one side of a relationship, but that doesn’t mean it’s reliable in this case. Ultimately it was.

Then I built a single key field and related it to the parent and child tables. (How efficient of me, right?) Then I used the relationship to determine if the ID represented a “header” record in the report, which in pseudocode would look like this:

If [ IsValid (HeaderRelationship) ; ChangeFieldColor ; Don’tChangeAnything ]

[Image: New relationship graph for report]

The script looped through the records, gathered the keys into a variable, passed the variable to the target file, and the field definitions in that file parsed the keys out into their records.

Again, this setup mostly worked. But I was getting some weird results (like header records that were also non-header records), and I still hadn’t found a replacement for looping through records when grabbing all the keys to begin with.

Problems With the Data = Good

Well, this is where I found out that the system is not using universally unique keys. In other words, the IDs were not unique from table to table. So when my key field definition went looking for a match in the parent table, it often found one, even though it was a child key that had been parsed out. That’s what was causing my weird results!

However, this was a wondrous accident. While investigating the key fields, I found that the field in the parent table (that supposedly linked to the child table but didn’t) actually had two values in it, not just one, thus accounting for all the child records! Breakthrough!

What had happened is that when I threw the key field on the layout before, I never resized it or clicked into it, because I thought I could see the full value. And when I saw the pipe character at the end of it, I recognized it from elsewhere in the system as a parsed value, rather than as a divider between two different values.

[Image: Report ID partially hidden]

[Image: Report ID expanded]

When I discovered this, I thought I might be able to just take the parent keys off to the side in my report table and walk through them there. I could now build a relationship to all the child records of a given parent key, and somehow work with that.

So I built the six relationships off of the Report table, and tried again:

  1. Loop through each parent record, and capture their linking keys into a variable.
  2. Pass the variable to the report file.
  3. Loop through the keys one by one, putting the parent and children records together as I go.
  4. Sort.

[Image: Replacing gray schema with the orange schema]

[Image: Old relationship graph for report]

This time the problem was that I needed to commit the record each time I put in a new key field so that it would forget the relationship this record had last time and display the new data. I tried building a constant predicate into the relationship so it would refresh by setting a single field to “1” – which I learned from Todd Geist via his post-DevCon video about Commit Record, which you should see if you haven’t.

Unfortunately, that didn’t help. It seemed to be as slow as replacing the field contents itself. So I found myself somewhat back at square one. I decided to shift my thinking to the CPU usage associated with navigating from record to record in the parent table.

My first thought was to compare looping through a list view instead of looping through form view where the child portals lived. Looping through records in list view did help, but not enough for me to tell the client, “It’s fast.”

Next, I tried blank layouts based on the parent table, so that no other fields but mine were being queried while I navigated record to record. That did help, in fact. I got both an increase in speed, and a decrease in CPU usage. But it was still less than awesome. I knew I could do better.

Call for (Virtual) Backup

At this point it was time to sit back and rethink the problem with a clear head. One family board game and a good night’s sleep later, I decided to take a trip through Google to make sure I wasn’t missing anything related to FileMaker speed. I was looking for two things: collecting data as quickly as possible without looping, and best practices for when you absolutely had to loop.

First, I looked for a custom function for grabbing records. And if you’ve ever looked for a custom function, there’s one place Google always takes you: Brian Dunning’s custom function list. A quick search for GetNthRecord yielded a whole host of solutions looking for a problem. But the one that caught my eye was GetNthRecordSet( ). This function allowed me to grab all the values of a given field in a found set of records without looping. This was perfect.
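In spirit, the function collects one field’s value from every record in the found set into a return-separated list, without a scripted record-to-record loop. A rough Python analog (the record data and names here are illustrative):

```python
# Stand-in for a found set of records; each dict is one record.
found_set = [{"id": "1001"}, {"id": "1002"}, {"id": "1003"}]

def get_nth_record_set(field: str, start: int, end: int) -> str:
    # Like the GetNthRecordSet( ) idea: values of `field` for records
    # start..end, one per line, with no record-to-record navigation.
    return "\n".join(str(found_set[n - 1][field]) for n in range(start, end + 1))

keys = get_nth_record_set("id", 1, len(found_set))
```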

Next, I stumbled across an article from HOnza Koudelka called FileMaker Script Execution Time Cut From 5 Hours to 6 Seconds. That sounded like he had written that article just for me. The main takeaway from that article for me was this: “…the fix often is as easy as replacing the stupid solution with the normal one.”

While I was all fired up about the virtual list and innovating, I still didn’t have a workable solution. What if all I needed to do, as HOnza suggested, was something more traditional? At this point it was worth a try. I was running out of time!

Filed Under: Relational Design Tagged With: Guest post, Performance, Reporting

Taming the Virtual List – Part 1

October 7, 2011 by Lisette Wilson

Have you ever wanted to create a report in FileMaker, but found it just doesn’t work that way? Perhaps you need to combine disparate data from several tables. Or export data where several records are combined into a single row. Or show totals both “across” and “down”, with a pivot table. Or format an invoice so that a set number of rows appear on each page, with a header and summary totals on every page. With the Virtual List Technique, you can achieve all of these and much more.

What is the Virtual List Technique?

It’s a powerful tool cooked up by Bruce Robertson of Concise Design. Put simply, you use a looping script to build a value list in a global variable, and each value or row becomes a field value in a record in a utility table.

It’s somewhat similar to using a dedicated reporting table, where you create records and set fields, but in this case, the values are short-lived. The Virtual List Technique has several advantages.

  • No need to create and destroy records, or track records for this particular report.
  • Multi-user friendly. Since data is stored in global variables, it’s unique to each user. A dedicated reporting table would require overhead to ensure each user viewed only the records for that report.
  • Speed. Global variables live in RAM on the local machine, not on the server hard drive across the network.

It’s not just for reporting. SeedCode Complete utilizes this technique in the calendar displays and in “selectors” where the user is presented with a pop-up window to select a record in another table. fmSearchResults uses it to produce search results across multiple tables.

So, what do you need?

At a minimum:

  • A script (possibly a loop, but the List function is your friend) that builds a return-separated value list in a global variable [$$array].
  • A utility table with as many records as rows in your reports. (100 records have sufficed for my needs thus far, but use as many as you need.)
    • A number field filled sequentially (1 to 100) [Row]
    • An unstored calculation field to extract values from the global variable
      [ GetValue ( $$array ; Row ) ]
    • A layout with context of the utility table showing the unstored calculation field, or a portal pointing toward that table.
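Put together in Python (a sketch only; the variable and field names follow the list above):

```python
# $$array, built by a looping script or the List function.
array = "Alpha\nBeta\nGamma"

def get_value(value_list: str, row: int) -> str:
    # Behaves like GetValue ( $$array ; Row ): 1-based, "" past the end.
    values = value_list.split("\n")
    return values[row - 1] if 1 <= row <= len(values) else ""

# Each record in the utility table evaluates the unstored calc for its Row:
report_rows = [get_value(array, row) for row in range(1, 6)]
# Rows past the end of the data display as blank, so 100 utility records
# can serve reports of any smaller size.
```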

This versatile technique can be expanded by using multiple global variables and unstored calculation fields, repeating global variables, conditional formatting, and even images stored as a reference. Add this foundational skill to your FileMaker toolkit, and you may never answer a reporting request with, “FileMaker doesn’t work that way,” again. In Part 2, I’ll walk through a specific example of how this technique solved a MightyData customer request for a sales order formatted with a set number of rows on each page, and header and summary totals on every page.

Filed Under: Scripting Tagged With: Guest post, Reporting, Virtual list

FileMaker Music Survey

September 27, 2011 by Matt Navarre

Within the FileMaker community, there seem to be several subcultures: music, photography, woodworking, and knitting, just to name a few. At DevCon, we used to have a jam session that was generally packed with too many people wanting to play and sing. We had to impose limits, such as playing only 3 songs and then handing off the guitar or drum kit to the next person in line.

As a musician, I wondered if there was a link between music and FileMaker development – perhaps they are both forms of creative expression, or linked to skill in logic and math? Is there a correlation between expertise in the two fields? How many developers are also musicians?

The Survey: Musician or Not

Matt Navarre and Alexis Allen, both musicians and developers, devised a short survey to hopefully answer some of these questions. Alexis was a little skeptical, since she knows lots of musicians who aren’t terribly good with computers. Matt was more confident, having been at those jam sessions, waiting in line to play guitar.

In a 3-week period, we got 95 responses to the survey, which is a pretty small sample size, but hopefully enough to draw some conclusions. 80% of people surveyed reported being musicians. I suspect that number is skewed, since this was called the ‘FileMaker Music Survey’ and was more likely to attract musicians.

  • Music Skill – Of the musicians, 92% claim to be intermediate or expert level, and the other 8% identify as beginners. This shows that the musicians in the community tend strongly to be accomplished.
  • Developer Skill – Of non-musicians, 42% claim to be intermediate developers, and 32% claim to be experts. Of musicians, 38% claim to be intermediate or above, and 65% claim to be experts.

Chart by skill level

That means musicians are twice as likely to be database experts. This strong correlation is perhaps the most surprising data in this survey. We are making no claims of causation here, but we sure wonder what causes expertise in databases.
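The arithmetic behind the "twice as likely" claim can be checked with a quick sketch. The figures below are just the self-reported expert percentages quoted above, not the underlying survey data set:

```python
# Self-reported "expert developer" rates quoted in the survey write-up
expert_rate = {
    "musicians": 0.65,
    "non_musicians": 0.32,
}

# How many times more likely a musician is to report expert-level skill
ratio = expert_rate["musicians"] / expert_rate["non_musicians"]
print(f"Musicians are {ratio:.1f}x as likely to report expert skill")  # 2.0x
```

With 95 respondents the percentages carry wide error bars, so the ratio is suggestive rather than precise.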

Of those with undergraduate or graduate music degrees, 72% claim to be experts. This seems to indicate that the more expertise you have as a musician, the more likely you are to also be an expert with FileMaker. This is strong evidence that there is indeed a link, and that musical training (and not merely talent or skill) somehow influences your ability to excel in FileMaker development.

Link Between Music and FileMaker

One of the questions we asked was whether you perceive a link between expertise in music and FileMaker. 58% of the non-musicians reported zero link, and only one person reported a significant link. 18% of the musicians reported zero link, and 30% reported a significant link; the remaining 52% reported a small to moderate link. This seems to indicate that if you are both a musician and a developer, you're far more likely to perceive a link. What would the nature of this link be? Does it make it easier to learn the "language" of FileMaker? We didn't ask, but that's an interesting question, too.

Chart by music interest

Perhaps the link is not directly in the music skill, but in "learning how to learn." To become a musician, you need to develop a skill set and an understanding of a symbolic language, learning new pieces measure by measure while continuing to understand and perfect each piece as a whole. These skills may translate well to learning database development.

Favorite Composers

We asked questions about several artists and composers to determine what genres of music people prefer, and we also inserted a few obscure artists as trick questions to gauge the depth of knowledge in some genres. For example, Heinz Holliger (an amazing classical oboe player) was unknown to 84% of all people taking the survey. Everyone who did know (and like) him was a musician.

Chart by composer

The runaway winners that were liked or loved were The Eagles at 81%, and Igor Stravinsky at 74%. That last one really surprised me.

The most disliked by far was Kenny G, at 60% dislike (though 26% like that no-talent clown who started winning Grammys. Oh wait, that's Michael Bolton.) Overall, though, when you combine the "Like" and "Dislike" responses, it evened out a bit.

We asked people what career they had before FileMaker, and there was certainly no consensus here. Less than 10% worked in a music-related field. A career in an IT-related field seemed to be the most common, at about 18%. The list included doctors, lawyers, students, and a wide array of other fields. It made for very interesting reading.

Our science-style (yet completely unscientific) survey didn't address a couple of things it would be really cool to know. Is the rate of musicianship higher among FileMaker developers than in tech generally? What is the nature of the link people see between their musical training and their FileMaker activities?

Still, the fact that expert musicians were more likely to report expert FileMaker skills does seem to support a correlation. So, all those hours your mom made you practice the piano weren’t a waste after all, and if you’re a struggling musician, maybe you should think about a career in FileMaker to help pay the bills.

Filed Under: News Tagged With: Case study, Guest post
