ISMRM 2017

Dear Friends in Research,

Flywheel is heading to the ISMRM 25th Annual Meeting, and we couldn't be more excited to sponsor the exhibit and engage with the community.

If you're looking towards your next grant, now is a great time to learn how Flywheel can reduce your IT burden by providing a single place to capture, manage, analyze, and collaborate with others. To give you as much information as possible, we have created template language you can add directly to your grant application. We can also provide a personalized letter of support. All you need to do is talk to us at booth 219 or fill out our letter-of-support contact form.

We've developed new platform features to give researchers even more time to focus on research and not IT. Here is a snapshot of some of them.

  • Project Gear Rules: Flywheel is introducing Project-Level Gear Rules. Project administrators can set up automated tasks that run when certain criteria are met. For example, you can run a DICOM-to-NIfTI conversion Gear every time DICOM data is uploaded to the project.
  • Run Gear Configure: Flywheel supports Gears that require configuration parameters at runtime. The platform renders the required fields from the Gear manifest, including recommended defaults, and allows users to modify the parameters. The values are stored with the analysis after the Gear executes, supporting provenance and reproducibility.
  • ISMRM-RD: Sharing MRI reconstruction algorithms and code requires a common raw data format. The Flywheel platform now supports the ISMRM-RD format for capturing raw data, using a combination of modality-specific Connectors and converter Gears. For more on ISMRM-RD, visit GitHub.
  • Authentication: Flywheel has expanded its external authentication providers to include LDAP and WeChat, in addition to Google login, the Flywheel default. The platform can be configured to authenticate strictly with a single provider or to allow mixed-mode authentication.
  • Project Dashboard: Reading about it won't do it justice. Come and see the new and improved dashboard at booth 219.
Schedule time to discuss the platform in depth.

When you stop by our booth, don't forget to inquire about our annual Flywheel Night. All you need for admittance is a ticket from our team. This year we will be hosting free drinks and appetizers at MW Restaurant, which is only a nine-minute walk from the convention center.

See you in Honolulu!

Productive Magnetic Resonance Research

Research in magnetic resonance (MR) imaging and spectroscopy is a flourishing field with roots in multiple Nobel Prizes.  Every year, methods that improve data acquisition, reconstruction, and post-processing are published with the goal of clinical translation.  Being competitive in this field requires rapid iteration and strong collaboration between technical methods developers and clinical researchers, and today’s funding environment places an even greater emphasis on early clinical validation as a prerequisite for winning grants.

Methods Development Lifecycle

The majority of MR research projects involve a collaboration between researchers with expertise in technical methods (e.g. compressed sensing) and researchers with clinical expertise (e.g. Alzheimer’s disease), and one of the primary determinants of a project’s fruitfulness is the strength of the collaboration between those two teams.  A common division of responsibility throughout the project is:

  1. Clinical team recruits subjects
  2. Technical team develops pulse sequences, reconstruction algorithms and/or post-processing algorithms
  3. Technical team acquires and processes data
  4. Clinical team and technical team perform quality assurance on the data
  5. Clinical team performs the final statistical hypothesis testing

Common Pitfalls

Metadata Tracking

Maintaining accurate metadata about the subjects, algorithms, and data is a crucial component of productive and reliable research.  Ideally, the metadata is stored in a central location, such as a Google Spreadsheet, accessible by all project members.  This mitigates the risk of having conflicting metadata across the team.  
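One lightweight way to keep such a shared table consistent is to treat it as structured data and check it programmatically. The sketch below is illustrative only: the field names and status values are hypothetical, not a prescribed schema.

```python
import csv
import io

# Hypothetical central metadata table: one row per scan session.
# The columns here are examples, not a required schema.
METADATA_CSV = """subject_id,scan_date,sequence_version,qa_status
sub-001,2017-01-12,v1.2,passed
sub-002,2017-01-19,v1.2,pending
sub-003,2017-02-03,v1.3,passed
"""

def load_metadata(text):
    """Parse the shared metadata table into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def find_duplicates(records):
    """Flag subject IDs that appear more than once -- a common
    symptom of conflicting copies of the metadata."""
    seen, dupes = set(), set()
    for row in records:
        sid = row["subject_id"]
        if sid in seen:
            dupes.add(sid)
        seen.add(sid)
    return dupes

records = load_metadata(METADATA_CSV)
print(len(records), find_duplicates(records))
```

A check like `find_duplicates` can run automatically whenever the table is updated, catching conflicting entries before they propagate.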

Software Versioning

In many cases, steps 1-4 are performed in parallel over the course of the project, and the pulse sequence, reconstruction algorithms, and processing algorithms can change dramatically between the first and last acquisitions.  As the software evolves, the code used to reconstruct or process the images can become dependent on the version of the pulse sequence used to generate the data.  It is critical in these situations to maintain multiple versions of all software components so that all data can be reprocessed correctly to verify results or fix bugs.  Another common pitfall is changing a processing algorithm over the course of a project without tracking which output data was processed with which version of the software.  This can cause confusion and conflicting results when the final hypothesis testing is done.

These problems can be mitigated by using robust version control (e.g. Git) for all software components and adding code that performs version checks on dependencies.  Recording data provenance is also critical to verify that all data used in final hypothesis testing have been processed through the same pipeline.
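The version check and provenance record described above can be sketched in a few lines. Everything here is hypothetical: the version strings, the compatibility table, and the record fields are illustrative, and a real pipeline would pull the commit hash from its own repository (e.g. via `git rev-parse HEAD`).

```python
# Illustrative compatibility table: which reconstruction versions
# are known to handle data from which pulse-sequence versions.
COMPATIBLE = {
    "seq-v1.2": {"recon-v0.9", "recon-v1.0"},
    "seq-v1.3": {"recon-v1.0"},
}

def check_versions(seq_version, recon_version):
    """Fail fast if the reconstruction code does not support the
    pulse-sequence version that generated the data."""
    if recon_version not in COMPATIBLE.get(seq_version, set()):
        raise ValueError(f"{recon_version} cannot process {seq_version} data")

def provenance_record(input_file, seq_version, recon_version, commit):
    """Attach enough provenance to each output to reprocess it later."""
    check_versions(seq_version, recon_version)
    return {
        "input": input_file,
        "sequence_version": seq_version,
        "recon_version": recon_version,
        "recon_git_commit": commit,  # hypothetical: captured at run time
    }

rec = provenance_record("sub-001.dat", "seq-v1.3", "recon-v1.0", "a1b2c3d")
print(rec)
```

Storing a record like this alongside every output makes it trivial to confirm, at hypothesis-testing time, that all data passed through the same pipeline version.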

Quality Control

The QA process is often distributed and informal, leading to multiple versions of the data with unknown provenance and disagreement over which is correct.  These problems can be magnified, or even become insurmountable, when research assistants performing the routine QA or analyses leave the project.  QA results should be stored in the central metadata repository to provide a single source of truth, and it should be clear when a piece of data has not yet undergone QA.
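As a sketch of that single-source-of-truth idea, the snippet below gates the analysis on a QA status column and makes un-reviewed data explicit rather than silently mixing it in. The status values ("passed", "pending", "failed") are assumed for illustration.

```python
# Hypothetical records drawn from a central metadata table.
records = [
    {"subject_id": "sub-001", "qa_status": "passed"},
    {"subject_id": "sub-002", "qa_status": "pending"},  # not yet reviewed
    {"subject_id": "sub-003", "qa_status": "failed"},
]

def analysis_set(records):
    """Admit only QA-passed data into final hypothesis testing."""
    return [r for r in records if r["qa_status"] == "passed"]

def awaiting_qa(records):
    """Make it explicit which data still needs review."""
    return [r["subject_id"] for r in records if r["qa_status"] == "pending"]

print(analysis_set(records), awaiting_qa(records))
```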

Successful collaborations avoid these problems through intentional planning and processes that ensure clear metadata recording, software version control, QA, and data provenance tracking.  These collaborations are more likely to have successful outcomes and more likely to quickly leverage the resulting data into more funding for follow-on studies.

Successful Collaborations with Flywheel

Flywheel gives researchers a platform designed around successful collaborations.  Rather than storing critical data in emails and local documents, collaborators have a central portal where:

  • The clinical team can store volunteer information that is instantly viewable by the technical team
  • Data can be viewed by all team members at the time of acquisition
  • Automated QA is run at the time of acquisition
  • There is a central place where manual QA takes place
  • All analyses are performed with versioned code
  • Data provenance is automatically tracked and viewable by all collaborators
  • There is a single source of truth for final hypothesis testing
  • No single person has hidden knowledge about the data or algorithms
  • Data can be easily re-packaged into preliminary studies for future grant applications

Flywheel was designed around the principle that scientists should do science, not IT.  The Flywheel platform provides the infrastructure for productive collaborations where technical developments can become clinically validated quickly and efficiently.  

If you're heading to ISMRM 2017 and want to talk, follow this link.

Or, you can contact Ryan and Flywheel at: