The Digital Marketplace Runner (‘dmrunner’) is an attempt to simplify setup for new developers: the Digital Marketplace has a number of backing services, apps and dependencies that don’t easily come together in a single place with existing tools like Vagrant, Docker or Nix. If you are setting up the Digital Marketplace from nothing, consider trying dmrunner first, although the rest of this page is still very useful reading to understand how the services, apps and repositories all come together to make the Digital Marketplace.
git clone git@github.com:alphagov/digitalmarketplace-runner.git
Do this first, if you’re new (or your Mac is new).
It is recommended to manage Python versions using pyenv. Run
pyenv install with the version specified in: https://github.com/alphagov/digitalmarketplace-runner/blob/master/.python-version
Using Managed Software Center, install Xcode (including the command line tools, i.e. git). This will take a while and will require a reboot.
Using Homebrew, run
brew install <package name> for the following:
phantomjs (for functional tests)
Using Pip3, run
pip3 install <package name> for the following:
Using NPM run the following command to install dependencies we require globally:
npm install -g gulp-cli standard
You will also need to generate an SSH key for your GitHub account (see the GitHub instructions).
If you’re new and have a Linux laptop, you can mostly follow the instructions for a Mac above.
Don’t install packages using Homebrew. Instead, on Ubuntu run
sudo apt-get install postgresql-client jq phantomjs. Then manually install terraform and cf. Install nvm, then switch to a Digital Marketplace repo and run:
There are eight Flask applications that make up the Digital Marketplace service. To run the service locally you need to clone each of the repositories and follow the detailed instructions in the README for each application, including initial bootstrapping of the database and Elasticsearch.
You don’t have to do it in the exact order listed here, but starting with the APIs makes sense.
git clone git@github.com:alphagov/digitalmarketplace-api.git
git clone git@github.com:alphagov/digitalmarketplace-search-api.git
git clone git@github.com:alphagov/digitalmarketplace-user-frontend.git
git clone git@github.com:alphagov/digitalmarketplace-buyer-frontend.git
git clone git@github.com:alphagov/digitalmarketplace-supplier-frontend.git
git clone git@github.com:alphagov/digitalmarketplace-admin-frontend.git
git clone git@github.com:alphagov/digitalmarketplace-briefs-frontend.git
git clone git@github.com:alphagov/digitalmarketplace-brief-responses-frontend.git
Each application has a Makefile with a
run-app command that sets the environment variables that enable the apps to communicate with each other in a local development setup, and a
run-all command that additionally installs all of the frontend and backend dependencies.
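The pattern looks roughly like the sketch below. This is illustrative only, not any app’s actual Makefile: the target dependencies and the environment variable names (DM_DATA_API_URL, DM_SEARCH_API_URL) are assumptions, so check the real Makefile in each repository.

```make
# Illustrative sketch only - check each app's actual Makefile.
# The environment variable names below are assumed, not confirmed.
run-app: virtualenv
	DM_DATA_API_URL=http://localhost:5000 \
	DM_SEARCH_API_URL=http://localhost:5001 \
	${VIRTUALENV_ROOT}/bin/flask run

# run-all installs dependencies first, then starts the app.
run-all: requirements frontend-build run-app
```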
Each application’s Makefile also has a
test command that first runs PEP 8 Python style checks and then (if they pass) runs the tests for the application.
To run the frontend tests you will need PhantomJS installed globally:
npm install -g phantomjs
The port number that each application runs on is set in its startup script. The default port numbers used are as follows (one is skipped as it is used by the Cisco AnyConnect VPN):
When making requests to your local API instance you should use the bearer token
myToken to authenticate, for example:
curl -H "Authorization: Bearer myToken" localhost:5000
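For illustration, the check the API performs on that header amounts to something like the sketch below. This is not the actual digitalmarketplace-api code: the function name is made up, and `valid_tokens` stands in for the app’s configured token list.

```python
# Illustrative sketch of a bearer-token check (hypothetical helper,
# not the real digitalmarketplace-api implementation).
def is_authorized(headers, valid_tokens=("myToken",)):
    auth_header = headers.get("Authorization", "")
    if not auth_header.startswith("Bearer "):
        return False
    # Strip the "Bearer " prefix and compare the remaining token.
    token = auth_header[len("Bearer "):]
    return token in valid_tokens
```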
The frontend applications are hyperlinked together but run on different ports. This can cause links to break when they point between different applications. The way around this is to set up nginx as a reverse proxy so that all frontend applications can be accessed through port 80.
If you’re not using
dmrunner, the easiest way to do this is to clone the digitalmarketplace-functional-tests repository and then run the script:
This will do all the setup of nginx that you need, and you should only ever need to run it once. You can subsequently just run:
And that will start up the necessary reverse proxy on port 80, allowing you to access all of your locally running applications seamlessly linked together at http://localhost/
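For reference, the generated configuration amounts to something like the sketch below. This is illustrative only: the setup script produces the real file, and the location paths and port numbers here are assumptions, so check each app’s startup script for the ports actually in use.

```nginx
# Illustrative sketch only - the functional-tests setup script
# generates the real configuration. Ports shown are assumptions.
server {
    listen 80;

    # Route path prefixes to the frontend app that owns them.
    location /user      { proxy_pass http://localhost:5007; }
    location /admin     { proxy_pass http://localhost:5004; }
    location /suppliers { proxy_pass http://localhost:5003; }

    # Everything else goes to the buyer frontend.
    location /          { proxy_pass http://localhost:5002; }
}
```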
Populating your database with some “realistic” data isn’t strictly necessary, but it is probably useful.
To import a recent sanitised dump of our production database (i.e. without any real email addresses, etc.), download the
exportdata.tar.gz file from the digitalmarketplace-cleaned-db-dumps S3 bucket and psql it into your local database.
Slightly more detailed instructions can be found in Importing data for developers.
Once you’ve imported data into your local database, you will need to index any briefs and services so that they will appear in search results (see next section).
Elasticsearch is generally installed directly on the host/development machine at the moment. If you are a new joiner, you can use the Quickstart instructions in the README.md for digitalmarketplace-search-api.
If you are an existing developer with an older installation of Elasticsearch, you will need to stop the Elasticsearch service, uninstall the old version and purge its data directory (elasticsearch_<username>/ on a Mac) before installing via the Quickstart. You may need to remove other directories and files; if you have a problem, speak to other developers for help.
After installation, you will need to index (G-Cloud) services into Elasticsearch. You can use a script in digitalmarketplace-scripts to do this. Example:
./scripts/index-to-search-service.py services dev --index=g-cloud-9 --frameworks=g-cloud-9 --create-with-mapping=services
You will also need to index DOS opportunities (briefs), for example:
./scripts/index-to-search-service.py briefs dev --index=briefs-digital-outcomes-and-specialists --frameworks=digital-outcomes-and-specialists-2 --create-with-mapping=briefs-digital-outcomes-and-specialists-2
Note that this is slightly different to how we index data on preview, staging and production - see Search API.
See also: the Elasticsearch Head GUI, useful for viewing/removing Elasticsearch indices and aliases locally.
You are free to use the editor/IDE of your choice! If you use IntelliJ IDEA Ultimate or PyCharm, the following instructions can be used to set up your environment.
Ensure you have run
make requirements-dev in the repository you wish to set up.
For IntelliJ only: Ensure you have installed the Python plugin
PyCharm should be able to detect the interpreter automatically. You can check this in PyCharm > Preferences > Project > Project Interpreter
File > Project Structure… > Project Settings > Project
On the drop down for Project SDK select:
Add SDK > Python SDK
Virtualenv Environment > Existing environment > Interpreter > …
Run > Edit Configurations > Add configuration > pytest
Working directory: <CHECKOUT DIRECTORY>
Follow the steps at Choosing Your Testing Framework; from the default test runner dropdown, select pytest.
For more configuration options, see also the JetBrains documentation: Run/Debug Configuration: pytest
As well as the main Flask applications there are several other repositories that you are likely to need sooner or later as a developer on Digital Marketplace:
A home for utility functions, Jinja2 filters, constants and anything else that needs to be used by more than one app.
It is pulled into all frontend apps by pip as a Python package, and
flask_init.py gives a consistent application startup for all frontend apps.
A home for utility functions and scripts specific to unit tests.
This is imported by pip as a Python package by all frontend applications and contains functions that wrap calls to both the data and search APIs:
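As a rough illustration of the wrapper idea, the sketch below builds authenticated requests to the data API. This is not the real digitalmarketplace-apiclient API: the class and method names here are hypothetical, and it uses the standard library rather than the package’s actual HTTP layer.

```python
# Illustrative sketch only - the real client is the
# digitalmarketplace-apiclient package, whose API differs.
import json
import urllib.request


class DataAPIClient:
    def __init__(self, base_url, bearer_token):
        self.base_url = base_url.rstrip("/")
        self.bearer_token = bearer_token

    def _build_request(self, path):
        # Every call carries the bearer token the API expects.
        return urllib.request.Request(
            self.base_url + path,
            headers={"Authorization": "Bearer " + self.bearer_token},
        )

    def get_service(self, service_id):
        # Hypothetical endpoint path, for illustration only.
        request = self._build_request("/services/{}".format(service_id))
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())
```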
Reusable design patterns and Jinja2 macros for frontend components on the Marketplace (tables, forms, buttons, fonts, etc.)
The frontend toolkit is pulled into all frontend apps by npm and gulp during the frontend build stage.
This is also known as the “content repository”. It contains all content for individual frameworks - the questions asked during the application process for supplier declarations and service descriptions, framework-specific messages, URLs and dates. All content files are in YAML format. The files here are also used to generate the JSON schemas used by the API to validate certain data fields - see the repository README for more detail.
The frameworks repository is pulled into all frontend apps (with the exception of the user frontend) by npm and gulp (i.e. as part of the frontend build), not as a Python package.
This is imported by pip as a Python package by all frontend applications. It loads the YAML files of content from the “frameworks” repository and processes them into objects that can be used by the frontend apps to generate the framework-specific pages they need to:
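As a rough illustration of that idea (this is not the content loader’s actual API; the class and the question fields below are made up), loaded YAML content becomes objects whose fields the frontend templates can read as attributes:

```python
# Illustrative sketch only - not the real
# digitalmarketplace-content-loader API.
class ContentQuestion:
    def __init__(self, data):
        self._data = data

    def __getattr__(self, name):
        # Expose the loaded YAML mapping's keys as attributes.
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)


# Data shaped like a (hypothetical) question from a frameworks YAML file:
question = ContentQuestion({
    "question": "What is the name of your service?",
    "type": "text",
})
```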
This is our acceptance test suite that runs against our live environments. It’s a Ruby project using Cucumber and Capybara. The full functional test suite runs against preview whenever new code is merged to one of the five main Flask apps, and against staging after a release is promoted from preview to staging. The purpose is to check that everything still works as expected. A smaller set of smoketests runs periodically against all live environments to check that all apps are up and running.
There are several processes, especially as part of framework submission and award, that are not (yet) built into our web applications. We have a collection of scripts that interact directly with our APIs and S3 buckets to do some of these routine tasks.
The code for this manual! The files used to generate this site can be written either in reStructuredText or Markdown. Create a pull request and ask for a review by dropping a comment into #dm-review on the GDS Slack instance.