This is useful if you want to automate the deployment and running of MercuryDPM (MDPM) on remote environments. In this example, I'll be controlling a remote machine called 'albatross', a Raspberry Pi with username 'pi'. I'll build a driver and run it, with all the parameterisation done through Ansible.
Preparation
Setting up inventory
Create a file called hosts and put in it the IP address of the remote machine:
...
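For example, a minimal inventory might look like the following (the IP address is a placeholder; substitute your machine's actual address):

albatross ansible_host=192.0.2.10 ansible_user=pi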
When you have created a playbook, tell Ansible to use this inventory file by running ansible-playbook -i hosts playbook.yml.
Setting up connection credentials
On the remote system, add your SSH public key to ~/.ssh/authorized_keys.
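One convenient way to do this from your control machine is with ssh-copy-id:

ssh-copy-id pi@albatross

Afterwards, check that Ansible can reach the machine before going further:

ansible -i hosts albatross -m ping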
vars.yml
Create a file called vars.yml. This file will have some values for our setup, as well as some options that we shall pass to the driver when we run it. (Those options could alternatively be passed through the command line.)
---
### TODO Maybe some AWS stuff

### SVN credentials
# FIXME these ought to be secured using ansible-vault, or at least
# this file should not be readable by others
svn_username: jmft2 # CHANGEME
svn_password: something # CHANGEME

### Where do you want to store output?
mountpoint: /media/apollo
storagedir: /media/apollo

driver_dir: MyDriver
driver: MyDriver

######
### Options for the driver itself
series: ansible
simulation: tut3 # TODO can we do a loop here?
flags: -name "{{series}}_{{simulation}}"
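As the FIXME above notes, plain-text credentials don't really belong here. One option (a suggestion, not part of the original setup) is to encrypt the whole file with ansible-vault:

ansible-vault encrypt vars.yml
ansible-playbook -i hosts playbook.yml --ask-vault-pass

The first command encrypts vars.yml in place; the second prompts for the vault password when you run the playbook.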
Playbook
In your base directory, create the file playbook.yml:
---
- name: Mount S3 storage
  hosts: albatross
  remote_user: pi
  become: no
  gather_facts: no
  vars_files:
    - vars.yml
  roles:
    - role: mountstorage

- name: Build and run MercuryDPM
  hosts: albatross
  remote_user: pi
  become: no
  gather_facts: no
  vars_files:
    - vars.yml
  roles:
    - role: workhorse
In this example, we shall give albatross two roles. First, it should mount storage, e.g. an S3 bucket. Next, it should download MercuryDPM, build our driver, and run it.
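For reference, the base directory will end up looking roughly like this. The files/ and templates/ subdirectories are standard Ansible role conventions (files/ holds things copied verbatim, templates/ holds Jinja2 templates), not anything specific to this tutorial:

hosts
vars.yml
playbook.yml
mountstorage/
    tasks/main.yml
workhorse/
    tasks/main.yml
    files/build_mdpm.sh
    templates/build_and_run_driver.sh.j2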
Mounting storage
If you're storing on a local disc, nothing needs to be done here. If you want to store on a Google Drive, an S3 bucket or a remote filer, then you will need a role for mountstorage.
Create a file at mountstorage/tasks/main.yml and put the following in it:
(TODO)
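Until the above is filled in, here is a minimal sketch of what the tasks might look like for an S3 bucket mounted with s3fs. Everything in it is an assumption rather than part of the original setup: the s3fs package name, the bucket name my-bucket, and the expectation that S3 credentials are already in /etc/passwd-s3fs on the remote machine.

--- # tasks file for mountstorage (sketch only)
- name: Install s3fs # assumes the Debian/Raspbian package name
  apt:
    name: s3fs
    state: present
  become: yes

- name: Create the mountpoint
  file:
    path: "{{mountpoint}}"
    state: directory
  become: yes

- name: Mount the bucket unless it is already mounted
  # 'my-bucket' is a placeholder; credentials assumed in /etc/passwd-s3fs
  shell: mountpoint -q {{mountpoint}} || s3fs my-bucket {{mountpoint}}
  become: yes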
Tasks for the workhorse role
...
--- # tasks file for workhorse
- name: Update apt cache and install common prerequisite packages
  apt:
    update_cache: yes
    name:
      - python3
  become: yes

- name: Download and build MercuryDPM
  block:
    - name: Install MercuryDPM dependencies
      apt:
        update_cache: yes
        name:
          - build-essential
          - g++
          - gfortran
          - subversion
          - cmake
          - graphviz
        state: present
      become: yes

    - name: Copy MercuryDPM build script
      copy: src=build_mdpm.sh dest=/opt mode=0755

    - name: Download and build MDPM
      command: bash /opt/build_mdpm.sh
      register: build_output

    - name: Print info
      debug: msg="{{build_output.stdout}}"

    - name: Print errors
      debug: msg="{{build_output.stderr}}"

    - name: Prepare script that builds and runs driver
      template:
        src: build_and_run_driver.sh.j2
        dest: /opt/MercuryDPM/build_and_run_driver.sh
        mode: 0755

    - name: Run that script
      # TODO https://stackoverflow.com/questions/39347379/ansible-run-command-on-remote-host-in-background
      shell: (nohup /opt/MercuryDPM/build_and_run_driver.sh </dev/null >/dev/null 2>&1 &)
      # command: bash /opt/MercuryDPM/build_and_run_driver.sh
      async: 10
      poll: 0
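Because the script is detached with nohup, the play finishes immediately and the simulation carries on in the background. One way to check on it later (a suggestion, not part of the original tutorial) is an ad-hoc command:

ansible -i hosts albatross -a "pgrep -af MyDriver"

which lists any running process whose command line mentions the driver binary.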
...
#!/bin/bash
set -eux

DPMDIR=/opt/MercuryDPM
DRIVER_DIR=$DPMDIR/MercuryBuild/Drivers/{{driver_dir}}
DRIVER_PATH=$DRIVER_DIR/{{driver}}
SIMDIR={{storagedir}}/{{series}}/{{simulation}}

# Build the driver
cd $DRIVER_DIR
make {{driver}}

# Run it from the output directory
mkdir -p $SIMDIR
cd $SIMDIR
$DRIVER_PATH {{flags}}
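This is a Jinja2 template: when the template task runs, every {{...}} is substituted from vars.yml. With the values given earlier, the rendered script on albatross would read:

#!/bin/bash
set -eux

DPMDIR=/opt/MercuryDPM
DRIVER_DIR=$DPMDIR/MercuryBuild/Drivers/MyDriver
DRIVER_PATH=$DRIVER_DIR/MyDriver
SIMDIR=/media/apollo/ansible/tut3

cd $DRIVER_DIR
make MyDriver

mkdir -p $SIMDIR
cd $SIMDIR
$DRIVER_PATH -name "ansible_tut3"

so the driver writes its output under /media/apollo/ansible/tut3.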