Git for Finite Element Projects

“Finite Element as a Data Scientist” is my new slogan. It’s been great spending a lot of time becoming familiar with Docker, Git, Conda, Streamlit and Kaggle over the last few months. As life is about to become hectic again, it makes sense to circle back and see how this knowledge can be applied to my field of expertise.

The Pro Git book

I have to brag: I read 160 pages of Pro Git. After using Git for more than 10 years, it turns out I knew next to nothing about it. As a lone developer, I’ve stated before that my only use for it was being able to jump between computers. With the new CI/CD paradigms it made sense to learn how to utilize branches when developing as part of a bigger team, and to ensure good practices for deployment.

Using the main branch on the remote repo as the ground truth makes sense. Additional branches, such as development and test branches, allow work to be tested according to a roadmap: changes can be rebased and merged back into main while it stays easy to jump back and forth between branches.
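In practice I imagine something like the sketch below – plain Python wrapping the git commands, so it can live next to the rest of the project automation. The repo layout and branch names are just assumptions, not a prescribed convention.

```python
# Minimal sketch of the main/development workflow described above.
# Assumes git is available on PATH; branch names are only an example.
import subprocess

def git(*args, cwd="."):
    """Run a git command in the repo and return its output."""
    result = subprocess.run(["git", *args], cwd=cwd, check=True,
                            capture_output=True, text=True)
    return result.stdout

git("switch", "-c", "development")          # branch off main for day-to-day work
# ... commit journal/input file changes on development ...
git("switch", "main")
git("pull", "--rebase", "origin", "main")   # keep main in sync with the remote
git("merge", "development")                 # bring the finished work back into main
```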

File Management in Finite Element Modelling

Most advanced Finite Element packages work like this: you have a model database which contains all the information needed to create jobs. This includes geometry, meshes, load cases, connections and what not – and that is only for one model. A model database file typically contains several models.

The model database files are stored as binary, which makes them hard to use with Git. In my experience, the industry standard is to store several copies as a project matures. This makes it possible to revisit old states and to branch models off and merge them back in as needed, while keeping each model database file lightweight.
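One way I could imagine living with that – a sketch of my own, not an established practice – is to keep the binary .cae copies outside the repo and commit only a small manifest of their hashes, so each commit still records exactly which copy of the database it belongs to:

```python
# Sketch: record a hash per model database copy instead of committing the
# binaries themselves. The "models" folder and the manifest name are assumptions.
import hashlib
import json
from pathlib import Path

def cae_manifest(folder="models"):
    """Map each .cae file in the folder to the SHA-256 of its contents."""
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(Path(folder).glob("*.cae"))}

Path("cae_manifest.json").write_text(json.dumps(cae_manifest(), indent=2))
```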

A model is a scenario of discretized geometry subjected to some kind of loading over one or more steps. Models are converted into “jobs”, which are written out as input files. These files contain everything the solver needs to do its thing. Input files are plain ASCII, and any experienced analyst can modify one by hand. They should be nice to use with Git – but the key difference between source files and input files is that the latter contain large tables of data. Merging changes in input files can be a pain.
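One thing that might soften that pain is a small helper that keeps only the keyword lines and collapses the big node/element tables into a line count before diffing. A hypothetical sketch – the file names are placeholders:

```python
# Summarize an Abaqus input file: keep keyword/comment lines (starting with "*")
# and replace each block of data lines with a "<n data lines>" marker, so a diff
# shows structural changes without drowning in coordinates.
import difflib

def keyword_summary(path):
    summary, data_lines = [], 0
    with open(path) as fh:
        for line in fh:
            if line.lstrip().startswith("*"):      # keyword or comment line
                if data_lines:
                    summary.append(f"  <{data_lines} data lines>")
                    data_lines = 0
                summary.append(line.rstrip())
            else:                                   # node/element/table data
                data_lines += 1
    if data_lines:
        summary.append(f"  <{data_lines} data lines>")
    return summary

# Compare two revisions of the same job
old = keyword_summary("job_rev1.inp")
new = keyword_summary("job_rev2.inp")
print("\n".join(difflib.unified_diff(old, new, lineterm="")))
```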

Abaqus has – to the best of my knowledge – a unique feature: it stores all commands issued in the user interface (CAE) as Python. There are three different types of these files, but the one that only contains model database changes is called the journal file. The journal file can be used to regenerate the model database if it is lost. The weakness of the journal file is that all external file references are hard-coded, meaning that full paths are stored. If a file is moved in five years’ time, you’ll have to edit all the file references manually. No biggie – but it helps to be aware of this fact.

To summarize

  • Three types of files are of interest: the model database (models), the input file (job) and the journal file (recreates the model database while providing readable Python code that shows what has been done).
  • The industry standard is to copy-paste ourselves into oblivion unless we keep strict rules / tables that help us stay focused.
  • Model database files (binary) and input files can be hard to use with Git in terms of diffing and merging.

Using Git with Abaqus

We’ll potentially need some tools to help us use the journal file. All the hard-coded paths must be transformed into “data folder” links. It could also help to “evolve” the .cae files by keeping a “root” .cae file that new .cae files are referenced against, so that “root.cae + new .jnl file = current state”. This technique should keep the .jnl files short if/when the project grows across different phases. It also ensures that serial execution of the .jnl files works – but it requires that strict rules are obeyed.
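As a first stab at the path problem, something like the sketch below could do the job. The data-folder token, the file names and the assumption that there is a single root path to replace are all mine – adjust to how the paths actually appear in your .jnl files:

```python
# Sketch: rewrite the absolute paths that CAE hard-codes into a journal file so
# they point at a project-relative data folder instead. DATA_DIR and the file
# names are hypothetical; backslashes may appear doubled in the .jnl text.
import re
from pathlib import Path

DATA_DIR = "../data"   # project-relative folder that replaces the machine path

def relink_journal(jnl_path, old_root):
    """Replace hard-coded references below old_root with DATA_DIR links."""
    text = Path(jnl_path).read_text()
    pattern = re.escape(old_root.rstrip("/\\"))
    portable = re.sub(pattern, DATA_DIR, text)
    out_path = Path(jnl_path).with_name(Path(jnl_path).stem + "_portable.jnl")
    out_path.write_text(portable)

relink_journal("model.jnl", r"C:\Users\analyst\project\data")
```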

Another alternative is to try the Python 3 library (pyabaqus, referenced below). This would allow for the use of Jupyter notebooks, which are git-friendly enough for collaboration. Definitely worth a try!

Outline of Git Experiments

CAE + JNL Files

Run a small project with a locally hosted repo. Use main and development branches, and apply rules to reduce model database clutter.

Each project consists of two .cae files:

  • Project Models
  • Sub Models (for Development)

Finished sub models are joined into the project models for various tasks such as mesh convergence studies, reporting and similar. These are the resulting models that are stored and/or shared for transparency.

Jupyter Notebooks using pyabaqus

TBD

Reference to the library:

https://pypi.org/project/pyabaqus/

I am sure that collaboration and the integration of ML will benefit from a pure-Python approach – where an assembly (materials, parts, sections, interactions) is combined with load cases (surfaces/sets + loads/BCs). This is the most flexible way possible to collaborate, store models and run parametric studies.
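Just to make that concrete, here is a rough sketch of what such a pure-Python model description could look like, independent of any particular library. Every class and field name below is made up for illustration:

```python
# Hypothetical pure-Python model description: the assembly and the load cases
# are plain data structures that can be diffed, reviewed and parameterized.
from dataclasses import dataclass, field

@dataclass
class Assembly:
    parts: list[str]
    materials: dict[str, dict]          # e.g. {"steel": {"E": 210e9, "nu": 0.3}}
    sections: dict[str, str]            # part -> material
    interactions: list[str] = field(default_factory=list)

@dataclass
class LoadCase:
    name: str
    sets: dict[str, list[int]]          # named node/element sets
    loads: dict[str, float]             # set -> load magnitude
    bcs: dict[str, str]                 # set -> constraint type

@dataclass
class Model:
    assembly: Assembly
    load_cases: list[LoadCase]

# A parametric study then becomes a loop over plain Python objects:
base = Assembly(parts=["plate"],
                materials={"steel": {"E": 210e9, "nu": 0.3}},
                sections={"plate": "steel"})
cases = [LoadCase(name=f"pressure_{p}", sets={"top": [1, 2, 3]},
                  loads={"top": p}, bcs={"bottom": "ENCASTRE"})
         for p in (1e5, 2e5, 4e5)]
model = Model(assembly=base, load_cases=cases)
```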

Being able to do so in a managed way with git is only a happy bonus.
