Framework lifecycle for developers

0 - overview

Each framework goes through a sequence of statuses that define the framework lifecycle:

  • coming: displays a message to users that the framework will be open for applications soon

  • open: suppliers can apply to the framework

  • pending: applications close, the reports are generated and sent to CCS

  • standstill: results are sent to suppliers; successful suppliers sign and return their agreement files

  • live: supplier services are available to buyers on the Digital Marketplace

  • expired: services for the framework are no longer available on the Digital Marketplace
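
The sequence above only ever moves forward; as a sketch (assuming no status is ever reverted, and noting there is no 'delete' API method for frameworks):

```python
# A sketch of the status sequence, assuming statuses only ever move forward.
STATUSES = ["coming", "open", "pending", "standstill", "live", "expired"]

def next_status(current: str) -> str:
    """Return the status that follows `current` in the lifecycle."""
    i = STATUSES.index(current)
    if i == len(STATUSES) - 1:
        raise ValueError("'expired' is the final status")
    return STATUSES[i + 1]

print(next_status("standstill"))  # live
```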

There are a number of steps involved in launching a new framework iteration, aside from creating the framework object itself in the database. Follow all the instructions in the Preparing to add a new framework documentation before beginning the process below.


Make sure you coordinate these steps with the framework’s product manager and/or comms team before running on production. There is no ‘delete’ API method for frameworks, so proceed with caution!

1 - coming

The framework will be open for applications soon, and a message will be shown on the Digital Marketplace home page.

Once the setup steps in adding frameworks are complete, the new framework object can be created with status “coming”:

curl -H "Authorization: Bearer <API_TOKEN>" -H "Content-Type: application/json" -X POST -d '{
    "updated_by": "",
    "frameworks": {
        "slug": "g-things-23",
        "name": "G-Things 23",
        "framework": "g-things",
        "status": "coming",
        "clarificationQuestionsOpen": false,
        "lots": ["cloud-hosting", "cloud-software", "cloud-support"],
        "hasDirectAward": true,
        "hasFurtherCompetition": false
    }
}' https://<API_ENDPOINT>/frameworks

Another POST request is required to update the framework with further attributes:

  • the known/expected framework lifecycle dates

  • the allowDeclarationReuse flag

The allowDeclarationReuse and applicationsCloseAtUTC keys are used to determine if declarations can be reused from one framework to the next. If allowDeclarationReuse is not true, or if applicationsCloseAtUTC is undefined, this framework will not be offered to suppliers as a valid source of answers for their declaration.
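
The rule above can be expressed as a small predicate (a sketch; the dict mirrors the framework attributes described in this document):

```python
# Sketch of the declaration-reuse rule: a framework is only offered as a
# source of answers if allowDeclarationReuse is true AND applicationsCloseAtUTC
# is defined.
def is_valid_reuse_source(framework: dict) -> bool:
    return (framework.get("allowDeclarationReuse") is True
            and framework.get("applicationsCloseAtUTC") is not None)

print(is_valid_reuse_source({
    "allowDeclarationReuse": True,
    "applicationsCloseAtUTC": "2000-01-01T12:00:00.000000Z",
}))  # True
```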

The other datetimes are used for display purposes only and do not currently affect framework state in any way (i.e. the status is not automatically changed from live to expired at the time specified in frameworkExpiresAtUTC).

Set the lifecycle dates and declaration reuse flag with:

curl -H "Authorization: Bearer <API_TOKEN>" -H "Content-Type: application/json" -X POST -d '{
    "updated_by": "",
    "frameworks": {
        "allowDeclarationReuse": true,
        "applicationsCloseAtUTC": "2000-01-01T12:00:00.000000Z",
        "intentionToAwardAtUTC": "2000-01-01T12:00:00.000000Z",
        "clarificationsCloseAtUTC": "2000-01-01T12:00:00.000000Z",
        "clarificationsPublishAtUTC": "2000-01-01T12:00:00.000000Z",
        "frameworkLiveAtUTC": "2000-01-01T12:00:00.000000Z",
        "frameworkExpiresAtUTC": "2000-01-01T12:00:00.000000Z"
    }
}' https://<API_ENDPOINT>/frameworks/<FRAMEWORK_SLUG>

Be careful with timezones: all times in the database and on the server are in UTC, which coincides with UK time (GMT) only from the last Sunday of October until the last Sunday of March. For the rest of the year the UK follows British Summer Time (BST), which is one hour ahead (UTC+01:00).
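
For example, to turn a UK local time into the UTC string format used above, the standard library can do the conversion (a sketch; requires Python 3.9+ for zoneinfo):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def uk_to_utc_string(uk_time: datetime) -> str:
    """Interpret a naive datetime as Europe/London local time and render it in UTC."""
    aware = uk_time.replace(tzinfo=ZoneInfo("Europe/London"))
    return aware.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

# 5pm on 1 July is during BST (UTC+1), so the stored time is 4pm UTC:
print(uk_to_utc_string(datetime(2023, 7, 1, 17, 0)))   # 2023-07-01T16:00:00.000000Z
# 5pm on 1 December is GMT, which matches UTC:
print(uk_to_utc_string(datetime(2023, 12, 1, 17, 0)))  # 2023-12-01T17:00:00.000000Z
```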

2(a) - open (with clarification questions open)

While a framework is open, suppliers can make their declaration and submit services that they want to provide. Suppliers can also submit their public clarification questions to Crown Commercial Service (CCS) if the framework’s clarificationQuestionsOpen attribute is set. Answers to these clarification questions are published to all suppliers by Digital Marketplace.

  • Before opening the framework, ensure any previous frameworks whose declarations we want applicants to be able to reuse have both their allowDeclarationReuse flag and their applicationsCloseAtUTC field set.

  • Make sure any dates in the content are confirmed.


IMPORTANT: co-ordinate the launch with product/comms team before setting a framework to ‘open’.

To open the framework and its clarification questions:

curl -H "Authorization: Bearer <API_TOKEN>" -H "Content-Type: application/json" -X POST -d '{
    "updated_by": "",
    "frameworks": {
        "status": "open",
        "clarificationQuestionsOpen": true
    }
}' https://<API_ENDPOINT>/frameworks/<FRAMEWORK_SLUG>

There is also a helper script in the scripts repository that makes the same update for a given stage, passing {'status': 'open', 'clarificationQuestionsOpen': True} as the fields to set.

  • The framework’s product manager regularly publishes any clarification question answers by uploading PDFs in the admin. Suppliers should be emailed after each batch of answers has been published (at most one email per day). This can be done with the Jenkins job notify_suppliers_of_framework_clarification_questions_answers.

  • Update the stats_to_performance_platform Jenkins job with the new framework slug. The framework’s product manager should be able to set up the new dashboard with the Performance Platform team and obtain any new credentials.

  • Update the export_supplier_data_to_s3 Jenkins job with the new framework slug. This allows the framework manager to download the supplier contact list from the admin, to email them with clarification question answers and other updates.

2(b) - open (with clarification questions closed)

After the clarificationsCloseAtUTC date has passed, suppliers can no longer submit clarification questions to CCS (though they can still submit private questions directly to CCS about their application, which aren’t published).

Clarification questions must be closed manually, with:

curl -H "Authorization: Bearer <API_TOKEN>" -H "Content-Type: application/json" -X POST -d '{
    "updated_by": "",
    "frameworks": {
        "clarificationQuestionsOpen": false
    }
}' https://<API_ENDPOINT>/frameworks/<FRAMEWORK_SLUG>

Reminding suppliers to complete their application

One week before applications close, a developer should run the Jenkins job Notify suppliers with incomplete applications - production to send email reminders (via Notify) to suppliers with incomplete applications, listing the steps they have left to do.

Scaling up the apps

Traffic will increase sharply in the week before applications close. Be ready to scale up the number of instances on the Supplier frontend, the API, and potentially the router and Antivirus API apps.

See Scaling PaaS apps for information on how to do this.

3 - pending

Applications for the framework are complete, and suppliers can no longer edit their declaration or services.

Close applications with:

curl -H "Authorization: Bearer <API_TOKEN>" -H "Content-Type: application/json" -X POST -d '{
    "updated_by": "",
    "frameworks": {
        "status": "pending"
    }
}' https://<API_ENDPOINT>/frameworks/<FRAMEWORK_SLUG>

Any apps that were scaled up prior to applications closing can now be scaled down to the normal number of instances.

The stats_to_performance_platform Jenkins job to send data to the performance platform can now be disabled. You may need to contact the Performance Platform administrator to get the correct final totals to show up on the dashboard.

Exporting data for CCS

Application data for the framework now needs to be exported and sent to CCS for evaluation as soon as possible after applications close. There are some test supplier accounts in production for use by the developers and CCS. These can be found in the credentials repo, and should be excluded from any reports or framework awards.

A developer should carry out the following tasks in the right order:

  • Ensure the framework’s intentionToAwardAtUTC date (and the other lifecycle dates) are set and correct

  • Run the Notify suppliers whether application made for framework job on Jenkins. This will check the application status of all suppliers who showed interest in the framework and email them to say either “we got your application” or “you didn’t apply to the framework”.

  • Run the script from our scripts repository that generates the list of supplier applications. This shows how far each supplier got once they’d started their application, i.e. whether they completed their declaration and the number of services submitted/left in draft in each lot.

  • Run the Mark definite framework results job on Jenkins to determine automatic passes, automatic fails and discretionary award statuses and set the majority of on_framework values in the database. The assessment schema previously generated in the frameworks repository is used here to validate the supplier declarations. Make sure the schema has been committed to the scripts repository, so that Jenkins can access it.

  • Run the script from our scripts repository that exports the results from the previous step, ready to pass to CCS. It generates three files: one each for successful, failed and discretionary results.

  • (G-Cloud only) Run the Scan G-Cloud services for bad words job on Jenkins, specifying the ‘draft’ services option. Download the CSV report from the Jenkins box and send to the CCS Category team for review. It’s up to CCS to decide on any actions (e.g. disqualifying the supplier from the framework completely), however they may provide us with a list of suppliers or services to disable prior to the framework going live.

  • Finally, run the script from our scripts repository that generates a separate list of all applicants, including contact details from their declaration. CCS can use this to get in touch with suppliers to clarify details of their application when necessary.
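
The "definite results" step above boils down to a three-way classification. A toy sketch of the idea (the inputs are illustrative, not the real declaration schema; the real job validates declarations against the assessment schema):

```python
def definite_result(declaration_valid: bool, has_discretionary_answers: bool) -> str:
    """Toy classification: automatic fail, discretionary, or automatic pass."""
    if not declaration_valid:
        return "fail"           # failed a mandatory part of the declaration
    if has_discretionary_answers:
        return "discretionary"  # CCS decides these case by case
    return "pass"

print(definite_result(True, False))   # pass
print(definite_result(True, True))    # discretionary
print(definite_result(False, False))  # fail
```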

We then wait for CCS to decide who passes and fails the discretionary applications, and which automatically-failed suppliers get on to the framework after all.

Re-assessing failed suppliers

CCS will supply us with a list of suppliers who should be on the framework after all, who need to be updated. This process differs depending on whether the framework’s services are assessed against a schema (as for DOS) or not (as for G-Cloud).


You may want to check that the number of supplier IDs in the final pass/fail lists matches the total initially produced by the export script. Usually everything is fine, but there have been occasions where suppliers were missed out and were then unable to see their application status.
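
This check can be sketched with set arithmetic (illustrative; in practice the IDs come from the exported CSV files):

```python
def missing_suppliers(exported: set, successful: set, failed: set, discretionary: set) -> set:
    """Supplier IDs that were in the original export but appear in no result list."""
    return exported - (successful | failed | discretionary)

# Supplier 4 applied but is in no list, so they would never see a result:
print(missing_suppliers({1, 2, 3, 4}, {1}, {2}, {3}))  # {4}
```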

  • Run the script to set the suppliers provided by CCS as on_framework. The supplier’s declaration and services are not changed, as they will have already set their draft services’ statuses to submitted.

Previously, DOS services were also assessed via a pass/fail schema. They are now validated during the supplier application period - a supplier can’t create a service that would ‘fail’. So the re-assessment process is now the same for G-Cloud and DOS.

Generating framework agreements

  • Make sure the relevant framework document templates are in the agreements repository and that they have been signed off by CCS.

  • Set the framework agreement details in the database, if this hasn’t been done already:

    curl -H "Authorization: Bearer <API_TOKEN>" -H "Content-Type: application/json" -X POST -d '{
        "updated_by": "",
        "frameworks": {
            "frameworkAgreementDetails": {
                "contractNoticeNumber": "RM1557xxiii-v1.2-12-13-2525",
                "countersignerName": "Joe Bloggs",
                "countersignerRole": "Director - Technology",
                "frameworkAgreementVersion": "RM1557xxiii",
                "frameworkExtensionLength": "12 months",
                "frameworkRefDate": "32-13-2525",
                "frameworkURL": "",
                "lotDescriptions": {
                    "cloud-hosting": "Lot 1: Cloud hosting",
                    "cloud-software": "Lot 2: Cloud software",
                    "cloud-support": "Lot 3: Cloud support"
                },
                "lotOrder": ["cloud-hosting", "cloud-software", "cloud-support"],
                "variations": {}
            }
        }
    }' https://<API_ENDPOINT>/frameworks/<FRAMEWORK_SLUG>

These framework agreement details are required to generate the contracts that suppliers will sign if they are accepted onto the framework. Some of this information will not be available when the framework is first added to the database; triple-check all the values with Crown Commercial Service (CCS), especially frameworkRefDate (usually the date of the last day of standstill, but this can vary).


If a supplier has made a mistake on their agreement details, they can re-sign, see: Supplier has made a mistake when signing framework agreement.

Uploading framework agreements/ ‘fail letters’

  • Run the upload script from our scripts repository to upload your local folder of freshly generated agreement PDFs to the agreements bucket, ready for the beginning of standstill. Run it with --help first if you want more details on what the script does and where the files end up.

  • CCS must supply result letters for all suppliers who failed to make it onto the framework. These must be PDFs uploaded to the agreements bucket, following the expected filename and path format. They can be uploaded with the same script, and this should be done before the beginning of standstill.

Any documents submitted by suppliers as part of the application are automatically scanned for viruses via the Antivirus API.

Remember to use the production-developers AWS account when running the upload scripts.

4 - standstill

Results are made available to suppliers at the beginning of standstill, along with signature pages for suppliers awarded a place on the framework to sign. This is also known as ‘Intention to Award’ (the actual ‘awarding’ of the framework happens at the end of standstill).

During standstill, buyers will see banner messages on the direct award project saved search page (/buyers/direct-award/g-cloud/projects/<project_id>), explaining how saved search behaves during the transition between frameworks. This content relies on both frameworks repository metadata and framework dates in the database.

  • Ensure that following_framework.yml exists in the frameworks repository metadata folder for the old framework, and that it contains slug: <following-framework-slug>. For G9 for example, it would contain slug: g-cloud-10.

  • Ensure that frameworkLiveAtUTC is set correctly in the database for the incoming framework

  • The framework status can then be set to standstill using:

    curl -H "Authorization: Bearer <API_TOKEN>" -H "Content-Type: application/json" -X POST -d '{
        "updated_by": "",
        "frameworks": {
            "status": "standstill"
        }
    }' https://<API_ENDPOINT>/frameworks/<FRAMEWORK_SLUG>
  • Finalised versions of the framework agreement documents must now be published on GOV.UK by the framework product manager / content designer.

  • Ensure the links to these documents in the frameworks repository at frameworks/<frameworks_slug>/messages/urls.yml are correct. Suppliers will see these links when we email them in the next step.

  • Run the script from the scripts repository that emails all successful suppliers with their result. The email also contains a link to log in and sign their framework agreement. Once logged in, suppliers can also see links to the published documents on GOV.UK.

Unsuccessful suppliers will be contacted directly by CCS with their result, so as long as we have already uploaded the PDF result letters (see ‘fail letters’ in the previous step) there is nothing more to do for them.

Preparing services to go live

Framework services can now be migrated from drafts to “real” services. This step is slightly different for G-Cloud and Digital Outcomes and Specialists.



This step took approximately 10 hours for G-Cloud 11’s 31,000 services. While some improvements have been made for G-Cloud 12, it’s recommended to start this task no later than the morning before the end of standstill.

  • For preview only, you need to either:

    • work out how to copy all the submission files in the S3 bucket from production to preview (and document it here); or

    • abandon this process and do it in production instead. Then copy the production database with published services back to preview.

  • Disconnect the destination documents bucket from the antivirus SNS topic so that the Antivirus API won’t get overwhelmed by the volume of documents being uploaded during the following process. This can be done by altering the terraform and applying this temporary configuration. Being disconnected from real-time scanning triggers shouldn’t prevent catch-up jobs from attempting to scan these new files at a more controlled pace overnight, so there shouldn’t be any concern about ending up with unscanned files in the bucket.

  • Run the Publish draft services Jenkins job. Make sure to specify the appropriate AWS account for the stage (production for Production and Staging, development for Preview); otherwise there will be permissions problems when a user/admin later tries to update their documents and the webapp finds itself unable to overwrite the file (more information is available in the script’s docstring). This job will:

    • copy submitted draft services from “on framework” suppliers into the services table with a new service ID

    • update the draft services table with the new service IDs and leave the drafts there

    • copy the documents from the submissions bucket into the live documents bucket with an appropriate filename and make them readable by everyone

    • update the service data with the new public document URLs

    • leave drafts for unsuccessful suppliers unaltered in the database

  • Publishing services takes several hours! Be patient and check the Jenkins console log for any failures. The script can be re-run manually, optionally supplying a file containing draft service IDs that failed the first time around. Check the script docstring for details.

  • Run the acknowledgement script under oneoff/ in the scripts repository to acknowledge the (several thousand!) audit events that have just been created during service publishing. This will stop them appearing as changes for the CCS Category team to approve in the Admin.

  • Run a script to suspend all services of suppliers who have not signed the framework agreement, see: Suspending suppliers without an agreement.

At this point, check:

  • that the new services are not yet visible on the marketplace

  • that the public document URLs work

  • that the documents show the intended owner account when viewed in the AWS console

  • that the count of services on the new framework matches the count of submitted drafts from “on framework” suppliers

Once G-Cloud services are migrated:

  • Check that the search mapping for the new framework has been committed to the Search API mappings folder (it should be named something like services-g-things-23.json) and that the commit has been released to production. See the Search API README for more details.

  • Run the Create index - <stage> Jenkins job with the new mapping, to create and populate the new index. Do not name the index after any known framework family or framework slug! It should be timestamped with the current date, e.g. g-things-23-2019-06-30.

  • Re-enable the antivirus SNS topic for the documents bucket.

Digital Outcomes and Specialists

DOS services are not visible on the Digital Marketplace, and do not need to be indexed or scanned. There’s just one step to do:

  • Run the script, found under the framework-applications/ directory of the scripts repo.

For DOS there are no documents to be transferred between buckets, so the publishing script will be much quicker than for G-Cloud (around 25 minutes for DOS4), and will not need any bucket names to be supplied as arguments. This also means there are no extra audit events, so we don’t need to run the acknowledgement script.

5 - live

This step differs for G-Cloud and Digital Outcomes and Specialists.

Digital Outcomes and Specialists

To make a new Digital Outcomes and Specialists framework live:

  • Run the export-dos-*.py scripts from the scripts repository to generate new CSVs of services for each lot.

  • Check the CSVs with a product manager/framework owner, who can decide what data should be included/changed. In the past we have manually cleaned (not all columns of data are needed), renamed columns and checked the correctness of the data. The final CSV should match the formatting of the current version.

  • Upload the CSVs for each lot to the S3 bucket, following the naming conventions for previous framework iterations:

  • Using the S3 GUI, make the files readable by the public, and update the metadata for the new files to set Content-Disposition to attachment.

  • Update the API’s DM_FRAMEWORK_TO_ES_INDEX config setting and release the app to production, so that any buyer edits (such as clarification questions and answers) to briefs are immediately re-indexed. (Note that this config is in the Data API, not the Search API!)

  • Check for any hardcoded instances of the framework slug in the frontend templates (e.g. the Supplier FE dashboard and Admin FE home page) and make sure these are updated (or better yet, un-hardcode them!)

  • Use the DOS-specific API endpoint to set the framework live. The endpoint will update the incoming framework’s status to live and set the outgoing framework’s status to expired. It will also migrate any existing drafts for the outgoing framework to be associated with the incoming framework: the transition between DOS frameworks should be relatively transparent for buyers, and we don’t want them to lose their existing drafts. Either use the API client, or use:

    curl -H "Authorization: Bearer <API_TOKEN>" -H "Content-Type: application/json" -X POST -d '{
        "updated_by": "",
        "expiringFramework": "<EXPIRING_FRAMEWORK_SLUG>"
    }' https://<API_ENDPOINT>/frameworks/transition-dos/<INCOMING_FRAMEWORK>
  • Create new mailing lists for each lot in Mailchimp (see Creating new lists in Mailchimp) and update the relevant scripts with the new list IDs.

  • Update the following scripts, test suites and Jenkins jobs/variables to include the new DOS framework. This list is probably not exhaustive so you should double check:

    • Scripts:


    • Jenkins jobs and variables:
      • export_data.yml

      • export_supplier_data_to_s3.yml

      • generate_upload_and_notify_counterpart_signature_pages.yml

      • notify_suppliers_of_dos_opportunities.yml (Note the last email for the old iteration should be sent the morning of the switchover, to alert suppliers of the previous day’s opportunities)

      • upload_dos_opportunities_email_list.yml

      • digitalmarketplace-jenkins/playbooks/roles/jenkins/defaults/main.yml

    • Tests:
      • Functional tests

      • Visual regression tests


G-Cloud

To make a new G-Cloud framework live:

  • Create an alias for the new index matching the latest live framework slug (e.g. g-things-22) on the datestamped index that includes the new services. Use the Update index alias - <STAGE> job on Jenkins. Check that the number of docs on the Search API /_status endpoint matches the number of expected live services and that the alias is present on the index.

  • Update the following jobs on Jenkins:
    • export_supplier_data_to_s3.yml

    • generate_upload_and_notify_counterpart_signature_pages

  • Update the Jenkins search_config to reference the correct framework(s) and apply the changes with:

    make jenkins TAGS=config

    This config will feed into the nightly Update index - <STAGE> jobs, and the Clean and apply database dump - <STAGE> job. Note that Jenkins may need to restart following this update.

  • Add the new g-things-23 index to the API’s DM_FRAMEWORK_TO_ES_INDEX config setting and release the app to production, so that any supplier edits to live services are immediately re-indexed. (Note that this config is in the Data API, not the Search API!)
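
As an illustration only, the setting maps a framework slug to the search index used for its documents, so that edits are re-indexed into the right place. The exact shape lives in the Data API’s config; the slugs below follow this document’s examples:

```python
# Illustrative sketch of the DM_FRAMEWORK_TO_ES_INDEX setting; check the Data
# API config for the real shape and values.
DM_FRAMEWORK_TO_ES_INDEX = {
    "g-things-22": {"services": "g-things-22"},
    "g-things-23": {"services": "g-things-23"},  # the newly added index alias
}

print(DM_FRAMEWORK_TO_ES_INDEX["g-things-23"]["services"])  # g-things-23
```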


Triple check everything’s ready with product and comms. Let’s go live!

  • Set the framework status to live using:

    curl -H "Authorization: Bearer <API_TOKEN>" -H "Content-Type: application/json" -X POST -d '{
        "updated_by": "",
        "frameworks": {
            "status": "live"
        }
    }' https://<API_ENDPOINT>/frameworks/<FRAMEWORK_SLUG>
  • Check that the new service pages are displayed on the Digital Marketplace and can be found using search.

  • Pat yourself on the back - you’re live!

6 - expired

G-Cloud services on the framework are still visible on the Digital Marketplace but show a banner explaining that the framework has expired and the service can no longer be procured. Services should no longer be returned by search - the service pages are only visible by direct navigation to them.

Set the G-Cloud framework status to expired using:

curl -H "Authorization: Bearer <API_TOKEN>" -H "Content-Type: application/json" -X POST -d '{
    "updated_by": "",
    "frameworks": {
        "status": "expired"
    }
}' https://<API_ENDPOINT>/frameworks/g-things-22

DOS frameworks are automatically expired when the new iteration is made live (see above) and do not need to be manually set to expired.