Kuma Documentation Release latest
Mozilla
October 30, 2020
Contents
1 Development

2 Getting started
2.1 Installation
2.2 Development
2.3 Troubleshooting
2.4 The Kuma test suite
2.5 Client-side testing
2.6 Feature toggles
2.7 Celery and async tasks
2.8 Search
2.9 Localization
2.10 CKEditor
2.11 Getting Data
2.12 Documentation
2.13 Docker
2.14 Deploying MDN
2.15 Rendering Documents
2.16 Building, collecting, and serving assets
Kuma is the platform that powers MDN (developer.mozilla.org)
CHAPTER 1
Development
Code: https://github.com/mdn/kuma
Issues: P1 Bugs (to be fixed in current or next sprint), P2 Bugs (to be fixed in 180 days), All developer.mozilla.org bugs, Pull Request Queues
Dev Docs: https://kuma.readthedocs.io/en/latest/installation.html
CI Server: https://travis-ci.com/mdn/kuma
Forum: https://discourse.mozilla.org/c/mdn
Matrix: #mdn room
Servers: What's Deployed on MDN? https://developer.allizom.org/ (stage), https://developer.mozilla.org/ (prod)
CHAPTER 2
Getting started
Want to help make MDN great? Our contribution guide lists some good first projects and offers direction on submitting code. Contents:
2.1 Installation
Kuma uses Docker for local development and integration testing, and we are transitioning to Docker containers for deployment as well.

Current status of Dockerization:

• Kuma developers are using Docker for daily development and maintenance tasks. Staff developers primarily use Docker for Mac. Other staff members and contributors use Docker's Ubuntu packages.
• The development environment can use a lot of resources. On Docker for Mac, the environment runs well with 6 CPUs and 10 GB of memory dedicated to Docker. It can be run successfully on 2 CPUs and 2 GB of memory.
• The Docker development environment is evolving rapidly, to address issues found during development and to move toward a containerized design. You may need to regularly reset your environment to get the current changes.
• The Docker development environment doesn't fully support a "production-like" environment. For example, we don't have a documented configuration for running with an SSL connection.
• When the master branch is updated, the kuma_base image is refreshed and published to DockerHub. This image contains system packages and third-party libraries.
• Our TravisCI builds include a target that builds Docker containers and runs the tests inside.
• Our Jenkins server builds and publishes Docker images, and runs integration tests using Docker.
• We are documenting tips and tricks on the Troubleshooting page.
• Feel free to ask for help on Matrix at #mdn or on Discourse.
2.1.1 Docker setup
1. Install the Docker platform, following Docker’s instructions for your operating system, such as Docker for Mac for MacOS, or for your Linux distribution. Non-Linux users should increase Docker’s memory limits (Windows, macOS) to at least 4 GB, as the default of 2 GB is insufficient.
5 Kuma Documentation, Release latest
Linux users will also want to install Docker Compose and follow the post-install instructions to confirm that the development user can run Docker commands. To confirm that Docker is installed correctly, run:

docker run hello-world
If you get errors running docker commands without sudo, see Docker's guide to managing Docker as a non-root user.

2. Clone the kuma Git repository, if you haven't already:

git clone --recursive https://github.com/mdn/kuma.git
If you think you might be submitting pull requests, consider forking the repository first, and then cloning your fork of it.

3. Ensure you are in the existing or newly cloned kuma working copy:

cd kuma
4. Initialize and customize .env:

cp .env-dist.dev .env
vim .env  # Or your favorite editor
Linux users should set the UID parameter in .env (i.e. change #UID=1000 to UID=1000) to avoid file permission issues when mixing docker-compose and docker commands. MacOS users do not need to change any of the defaults to get started; note, however, that there are settings in this file that can be useful when debugging.

5. Pull the Docker images and build the containers:

docker-compose pull
docker-compose build
(The build command is effectively a no-op at this point, because the pull command just downloaded pre-built Docker images.)

6. Start the containers in the background:

docker-compose up -d
2.1.2 Load the sample database
Download the sample database with either of the following wget or curl (installed by default on macOS) commands:

wget -N https://mdn-downloads.s3-us-west-2.amazonaws.com/mdn_sample_db.sql.gz
curl -O https://mdn-downloads.s3-us-west-2.amazonaws.com/mdn_sample_db.sql.gz
Next, upload that sample database into the Kuma web container with:

docker-compose exec web bash -c "zcat mdn_sample_db.sql.gz | ./manage.py dbshell"

(This command can be adjusted to restore from an uncompressed database, or directly from a mysqldump command.) Then run the following command:

docker-compose exec web ./manage.py migrate

This will ensure the sample database is in sync with your version of Kuma.
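The zcat pipe above simply streams the decompressed SQL dump into the database shell's stdin. As a rough illustration of that decompression step, here is a minimal sketch using only Python's standard library (the helper name and the tiny stand-in dump are ours, not part of Kuma):

```python
import gzip
import tempfile

def stream_sql_dump(path, chunk_size=64 * 1024):
    """Yield decompressed chunks of a gzipped SQL dump, as `zcat` would."""
    with gzip.open(path, "rb") as dump:
        while True:
            chunk = dump.read(chunk_size)
            if not chunk:
                break
            yield chunk

# Round-trip demonstration with a tiny stand-in for mdn_sample_db.sql.gz:
sql = b"CREATE TABLE wiki_document (id INT PRIMARY KEY);\n"
with tempfile.NamedTemporaryFile(suffix=".sql.gz", delete=False) as tmp:
    tmp.write(gzip.compress(sql))
    dump_path = tmp.name

restored = b"".join(stream_sql_dump(dump_path))
assert restored == sql
```

In the real restore, each decompressed chunk is consumed by the mysql client running inside the web container rather than collected in memory.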
2.1.3 Compile locales
Localized string databases are included in their source form, and need to be compiled to their binary form:

docker-compose exec web make localecompile
Dozens of lines of warnings will be printed:

cd locale; ./compile-mo.sh .
./af/LC_MESSAGES/django.po:2: warning: header field 'PO-Revision-Date' still has the initial default value
./af/LC_MESSAGES/django.po:2: warning: header field 'Last-Translator' still has the initial default value
...
./zu/LC_MESSAGES/javascript.po:2: warning: header field 'PO-Revision-Date' still has the initial default value
./zu/LC_MESSAGES/javascript.po:2: warning: header field 'Last-Translator' still has the initial default value
Warnings are OK, and will be fixed as translators update the strings on Pontoon. If there is an error, the output will end with the error, such as:

./az/LC_MESSAGES/django.po:263: 'msgid' and 'msgstr' entries do not both end with '\n'
msgfmt: found 1 fatal error
These need to be fixed by a Kuma developer. Notify them in the #mdn Matrix room or open a bug. You can continue with installation, but non-English locales will not be localized.
2.1.4 Generate static assets
Static assets such as CSS and JS are included in source form, and need to be compiled to their final form:

docker-compose exec web make build-static
A few thousand lines will be printed, like:

## Generating JavaScript translation catalogs ##
processing language en_US
processing language af
processing language ar
...
## Compiling (Sass), collecting, and building static files ##
Copying '/app/kuma/static/img/embed/promos/survey.svg'
Copying '/app/kuma/static/styles/components/home/column-callout.scss'
Copying '/app/build/locale/jsi18n/fy-NL/javascript.js'
...
Post-processed 'build/styles/editor-locale-ar.css' as 'build/styles/editor-locale-ar.css'
Post-processed 'build/styles/locale-ln.css' as 'build/styles/locale-ln.css'
Post-processed 'build/styles/editor-locale-pt-BR.css' as 'build/styles/editor-locale-pt-BR.css'
...
1870 static files copied to '/app/static', 125 post-processed.
2.1.5 Visit the homepage
Open the homepage at http://localhost.org:8000. You've installed Kuma!
2.1.6 Create an admin user
Many Kuma settings require access to the Django admin, including configuring social login. It is useful to create an admin account with password access for local development.
If you want to create a new admin account, use createsuperuser:

docker-compose exec web ./manage.py createsuperuser
This will prompt you for a username, email address (a fake address like [email protected] will work), and a password.

If your database has an existing account that you want to use, run the ihavepower management command, replacing YOUR_USERNAME and YOUR_PASSWORD with your username and password:

docker-compose run --rm web ./manage.py ihavepower YOUR_USERNAME \
    --password YOUR_PASSWORD
With a password-enabled admin account, you can log into Django admin at http://localhost.org:8000/admin/login
2.1.7 Update the Sites section
1. After logging in to the Django admin (an alternative is using the login test-super with password test-password), scroll down to the Sites section.
2. Click on "Change".
3. Click on the entry that says localhost:8000.
4. Change both the domain and display name from localhost:8000 to localhost.org:8000.
5. Click "Save".
2.1.8 Enable GitHub/Google authentication (optional)
Since Google's OAuth requires us to use a valid top-level domain, we're going to use http://localhost.org:8000 as an example for every URL here.

To automate setting Django up for social auth, you can run:

docker-compose exec web ./manage.py configure_social_auth

and follow its steps (and ignore the rest of this section). If you want to do it manually, follow these steps:

To enable GitHub authentication, you'll need to register an OAuth application on GitHub, with settings like:

• Application name: MDN Development for (
• Client id:
Now you can sign in with GitHub. To associate your password-only admin account with GitHub:

1. Login with your password at http://localhost.org:8000/admin/login.
2. Go to the Homepage at https://developer.mozilla.org/en-US/.
3. Click your username at the top to view your profile.
4. Click Edit to edit your profile.
5. Under My Profiles, click Use your GitHub account to sign in.

To create a new account with GitHub, use the regular "Sign in" widget at the top of any page.

With social accounts enabled, you can disable the admin password in the Django shell:

docker-compose exec web ./manage.py shell_plus
>>> me = User.objects.get(username='admin_username')
>>> me.set_unusable_password()
>>> me.save()
>>> exit()
2.1.9 Enable Stripe payments (optional)
1. Go to https://dashboard.stripe.com and create a Stripe account (if you don't have one already).
2. Go to https://dashboard.stripe.com/apikeys and copy both the publishable and secret key into your .env file. The respective config keys are STRIPE_PUBLIC_KEY and STRIPE_SECRET_KEY.
3. Go to https://dashboard.stripe.com/test/subscriptions/products and create a new product and plan.
4. Once created, copy the plan ID and put it into .env as STRIPE_PLAN_ID. Unless you set a custom ID, it should start with plan_.

If you're using Stripe in testing mode, you can also get test card numbers from https://stripe.com/docs/testing#cards.

Testing Stripe's webhooks locally requires setting up a tunneling service, like ngrok (https://ngrok.com). You should then set CUSTOM_WEBHOOK_HOSTNAME to the hostname you get from your tunneling service; for ngrok it might be https://203ebfab.ngrok.io. After Kuma has started, you will have a webhook configured in Stripe. You can view it on Stripe's dashboard: https://dashboard.stripe.com/test/webhooks

Note that with the free tier, a restart of ngrok gives you a new hostname, so you'll have to change the config again and restart the server (or manually change the webhook in Stripe's dashboard).
2.1.10 Enable Sendinblue email integration
1. Create a Sendinblue account at https://www.sendinblue.com (you can skip a lot of the profile set-up; look for "skip" in the upper right).
2. Get your v3 API key at https://account.sendinblue.com/advanced/api.
3. Create a list at https://my.sendinblue.com/lists/new/parent_id/1.
4. Add the Sendinblue config keys to your .env; the key names are SENDINBLUE_API_KEY and SENDINBLUE_LIST_ID.
2.1.11 Interact with the Docker containers
The current directory is mounted as the /app folder in the web and worker containers. Changes made to your local directory are usually reflected in the running containers. To force the issue, the containers for specified services can be restarted:

docker-compose restart web worker
You can connect to a running container to run commands. For example, you can open an interactive shell in the web container:

docker-compose exec web /bin/bash
make bash  # Same command, less typing
To view the logs generated by a container:

docker-compose logs web

To continuously view logs from all containers:

docker-compose logs -f

To stop the containers:

docker-compose stop

If you have made changes to the .env or /etc/hosts file, it's a good idea to run:

docker-compose stop
docker-compose up
For further information, see the Docker documentation, such as the Docker Overview and the documentation for your operating system. You can try Docker’s guided tutorials, and apply what you’ve learned on the Kuma Docker environment.
2.2 Development
2.2.1 Basic Docker usage
Edit files as usual on your host machine; the current directory is mounted via Docker host mounting at /app within various Kuma containers. Useful docker sub-commands:
docker-compose exec web bash  # Start an interactive shell
docker-compose logs web       # View logs from the web container
docker-compose logs -f        # Continuously view logs from all containers
docker-compose restart web    # Force a container to reload
docker-compose stop           # Shutdown the containers
docker-compose up -d          # Start the containers
docker-compose rm             # Destroy the containers
There are make shortcuts on the host for frequent commands, such as:

make up          # docker-compose up -d
make bash        # docker-compose exec web bash
make shell_plus  # docker-compose exec web ./manage.py shell_plus
Run all commands in this doc in the web service container after make bash.
2.2.2 Running Kuma
When the Docker container environment is started (make up or similar), all of the services are also started. The development instance is available at http://localhost:8000.
2.2.3 Running the tests
One way to confirm that everything is working, or to pinpoint what is broken, is to run the test suite.
Django tests
Run the Django test suite:

make test
For more information, see the test documentation.
Functional tests
To run the functional tests, see Client-side Testing.
Kumascript tests
If you’re changing Kumascript, be sure to run its tests too. See https://github.com/mdn/kumascript.
2.2.4 Front-end development
Assets are processed in several steps by Django and django-pipeline:

• Front-end localization libraries and string catalogs are generated based on gettext calls in JavaScript files.
• Sass source files are compiled to plain CSS with node-sass.
• Assets are collected to the static/ folder.
• CSS, JS, and image assets are included in templates using the {% stylesheet %}, {% javascript %}, and {% static %} Jinja2 macros.
In production mode (DEBUG=False), there is additional processing. See Generating production assets for more information.

To rebuild the front-end localization libraries:

make compilejsi18n
To rebuild CSS, JS, and image assets:

make collectstatic

To do both:

make build-static
Compiling JS on the host system with webpack
For a quicker iteration cycle while developing the frontend app, you can run:

npm i
npm run webpack:dev
This watches the React frontend and rebuilds both the web and SSR bundle when changes occur. It rebuilds only what has changed, and also restarts the SSR server. It serves React in development mode, which yields more explicit warnings and allows you to use tools such as the React DevTools. You can also run it in production mode:

npm run webpack:prod
Style guide and linters
There is an evolving style guide at https://mdn.github.io/mdn-fiori/, sourced from https://github.com/mdn/mdn-fiori. Some of the style guidelines are enforced by linters. To run stylelint on all .scss files:

npm run stylelint
To run eslint on .js files:

npm run eslint
Bundle Analyze
To get an idea about the size distribution of our React codebase, you can run:

npm run webpack:analyze
This will open an interactively explorable report in your browser.
Add a React page
Let’s say we are adding a new Example page to the dashboards section on Kuma. We’d like the pathname to be /dashboards/example/. First we need to set up a few things in Django:
12 Chapter 2. Getting started Kuma Documentation, Release latest
1. In kuma/dashboards/jinja2/dashboards/, create a new Jinja template that our view will render, called example.html. Paste the following code in example.html to get started:

{% extends "react_base.html" %}
{% block title %}{{ page_title(_('Example Page')) }}{% endblock %}
{% block content %}This works!{% endblock %}
{# This overrides the Jinja footer, since we will be using the React footer instead. Without this line, two footers will render. #} {% block footer %}{% endblock %}
2. In kuma/dashboards/views.py, render the Jinja template we just created:

def example(request):
    return render(request, "dashboards/example.html")
3. In kuma/dashboards/urls.py, add a new URL that will render our view. This code will change slightly depending on your URL and file structure:

# Import the view
# (This next line most likely will already be there; if not, add it to the top of the file.)
from . import views
...
re_path(r"^example/$", views.example, name="dashboards.example"),
4. We are done with the Django part! In your browser, go to http://wiki.localhost.org:8000/en-US/dashboards/example/ and you should see a page that says "This works!"

5. Now we can set up the React part. In kuma/dashboards/jinja2/dashboards/example.html, we can remove the "This works!" text and add the following code between {% block title %} and {% block content %}:

{# Pass `True` for initial data if you do not have any data to fetch. Otherwise, change `True` to `None` or whatever your initial data will be. #}
{% block document_head %}
  {{ render_react("SPA", request.LANGUAGE_CODE, request.get_full_path(), True)|safe }}
{% endblock %}
6. So your kuma/dashboards/jinja2/dashboards/example.html file should now look something like this:

{% extends "react_base.html" %}
{% block title %}{{ page_title(_('Example Page')) }}{% endblock %}
{# Pass `True` for initial data if you do not have any data to fetch (i.e. static page). #}
{# Otherwise, change `True` to `None` or whatever your initial data will be. #}
{% block document_head %}
  {{ render_react("SPA", request.LANGUAGE_CODE, request.get_full_path(), True)|safe }}
{% endblock %}
{% block content %} {% endblock %}
{# This overrides the Jinja footer, since we will be using the React footer instead. Without this line, two footers will render. #} {% block footer %}{% endblock %}
7. In kuma/javascript/src, if there is not a kuma/javascript/src/dashboards folder already, add a folder called dashboards.

8. In kuma/javascript/src/dashboards, create a file called example.jsx. Add the following lines as a starter:

// @flow
import * as React from 'react';

import { gettext } from '../l10n.js';
import A11yNav from '../a11y/a11y-nav.jsx';
import Header from '../header/header.jsx';
import Footer from '../footer.jsx';
import Route from '../route.js';

type ExampleRouteParams = {
    locale: string
};

export default function ExamplePage() {
    return (
        <>
            <A11yNav />
            <Header />
            {gettext('Hello!')}
            <Footer />
        </>
    );
}

const BASEURL =
    typeof window !== 'undefined' && window.location
        ? window.location.origin
        : 'http://ssr.hack';

export class ExampleRoute extends Route<ExampleRouteParams> {
    locale: string;

    constructor(locale: string) {
        super();
        this.locale = locale;
    }

    getComponent() {
        return ExamplePage;
    }

    match(url: string): ?ExampleRouteParams {
        const path = new URL(url, BASEURL).pathname;
        const examplePath = `/${this.locale}/dashboards/example/`;
        const regex = new RegExp(examplePath, 'g');

        if (regex.test(path)) {
            return { locale: this.locale };
        }
        return null;
    }
}
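The match() method in the listing above is plain locale-prefixed path matching. The same idea can be sketched in Python; the function name and return shape here are illustrative, not part of Kuma's API:

```python
import re

def match_example_route(url_path, locale):
    """Return route params if url_path is this locale's example dashboard, else None."""
    example_path = f"/{locale}/dashboards/example/"
    if re.search(re.escape(example_path), url_path):
        return {"locale": locale}
    return None

print(match_example_route("/en-US/dashboards/example/", "en-US"))  # {'locale': 'en-US'}
print(match_example_route("/en-US/dashboards/other/", "en-US"))    # None
```

A non-None return value is what tells the single-page app that this route should handle the URL; None lets the next route in the list try.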
9. We need to tell the single-page-app component about our new Route component, ExampleRoute. In kuma/javascript/src/single-page-app.jsx, at the top, add:

import { ExampleRoute } from './dashboards/example.jsx';
10. A few lines down in the same file, where you see const routes = [ ... ], add ExampleRoute, so it should look like this:

const routes = [
    ...
    new ExampleRoute(locale)
];
11. We are done! Go to /dashboards/example/ in your browser and you should see a header, footer, and the words, “Hello!”
React page with initial data
If we have initial data we'd like to pass to our React component, we can do so by updating the 4th argument (document_data) in the render_react function call. This data will then be available as a prop on the React side. For a real-life example, search for document_data in kuma/wiki/views/document.py.

1. Continuing our example from above, go back to kuma/dashboards/jinja2/dashboards/example.html. In the call to render_react, change True to your initial data. For our purposes, the initial data will be {"content": "Goodbye!"}. The full line should look like:

{{ render_react("SPA", request.LANGUAGE_CODE, request.get_full_path(), {"content": "Goodbye!"})|safe }}
2. In kuma/javascript/src/dashboards/example.jsx, import the RouteComponentProps type by changing:

import Route from '../route.js';

to:

import Route, { type RouteComponentProps } from '../route.js';
3. In the same file, update the ExamplePage function to accept a data parameter. The function declaration should now look like this:

export default function ExamplePage({ data }: RouteComponentProps) { ... }
4. Now we can access the initial data from Step 1. Change {gettext('Hello!')} to {gettext(data.content)}. Refresh the page and the "Hello!" text should be replaced by "Goodbye!"
React page using fetch()
In some cases, you may choose to fetch data asynchronously when the page renders.

1. Continuing our example from above, go back to kuma/dashboards/jinja2/dashboards/example.html. In the call to render_react, change the 4th argument to None. The full line should look like:
{{ render_react("SPA", request.LANGUAGE_CODE, request.get_full_path(), None)|safe }}
2. Refresh the page in the browser and you should now get an error. That is because we do not have a fetch() method yet, so let's make one. In kuma/javascript/src/dashboards/example.jsx, in the ExampleRoute class, add this method (note: we are using /api/v1/whoami just to show a working example of an API call; change this to reflect your API endpoint):

fetch(): Promise<any> {
    return fetch('/api/v1/whoami').then((response) => response.json());
}
3. Once the fetch completes, our data will be available as a data prop once again. In the ExamplePage function, replace the {gettext(data.content)} line with:

{!data && gettext('Loading...')}
{data && gettext(data.username)}
4. Your kuma/javascript/src/dashboards/example.jsx file should look like:

// @flow
import * as React from 'react';

import { gettext } from '../l10n.js';
import A11yNav from '../a11y/a11y-nav.jsx';
import Header from '../header/header.jsx';
import Footer from '../footer.jsx';
import Route, { type RouteComponentProps } from '../route.js';

type ExampleRouteParams = {
    locale: string
};

export default function ExamplePage({ data }: RouteComponentProps) {
    return (
        <>
            <A11yNav />
            <Header />
            {!data && gettext('Loading...')}
            {data && gettext(data.username)}
            <Footer />
        </>
    );
}

const BASEURL =
    typeof window !== 'undefined' && window.location
        ? window.location.origin
        : 'http://ssr.hack';

export class ExampleRoute extends Route<ExampleRouteParams> {
    locale: string;

    constructor(locale: string) {
        super();
        this.locale = locale;
    }

    getComponent() {
        return ExamplePage;
    }

    match(url: string): ?ExampleRouteParams {
        const path = new URL(url, BASEURL).pathname;
        const examplePath = `/${this.locale}/dashboards/example/`;
        const regex = new RegExp(examplePath, 'g');

        if (regex.test(path)) {
            return { locale: this.locale };
        }
        return null;
    }

    fetch(): Promise<any> {
        return fetch('/api/v1/whoami').then((response) => response.json());
    }
}
5. Refresh the page in the browser, and you should now see your username if you are logged in. Or just the header and footer if you are logged out.
2.2.5 Database migrations
Apps are migrated using Django's migration system. To run the migrations:

./manage.py migrate
If your changes include schema modifications, see the Django documentation for the migration workflow.
2.2.6 Coding conventions
See CONTRIBUTING.md for details of the coding style on Kuma. New code is expected to have test coverage. See the Test Suite docs for tips on writing tests.
2.2.7 Managing dependencies
Python dependencies
Kuma uses Poetry for dependency management. Poetry is configured in the pyproject.toml file at the root of the repository, and exact versions of dependencies (along with hashes) are stored in the poetry.lock file. Please refer to the Poetry docs on adding and updating dependencies. A few examples:

• Use poetry update to update and re-lock all dependencies to their latest compatible versions, according to the constraints in pyproject.toml.
• Use poetry update
• To update a package to the very latest, and not just what matches what's currently in pyproject.toml, add @latest. For example, poetry update pytz@latest.
• Use poetry add
Using Poetry directly on your host computer is also fine; the resulting pyproject.toml and poetry.lock files should be the same either way.
Front-end asset dependencies
Front-end dependencies are managed by Bower and checked into the repository. Follow these steps to add or upgrade a dependency:

1. On the host, update bower.json.
2. Start a root Docker container shell:

docker-compose run -u root web bash

3. (Docker only) In the root container shell, run:

apt-get update
apt-get install -y git
npm install -g bower-installer
bower-installer
4. On the host, prepare the dependency to be committed (git add path/to/dependency).

Front-end dependencies that are not already managed by Bower should begin using this approach the next time they're upgraded.
Front-end toolchain dependencies
The front-end toolchain dependencies are managed by yarn, but not checked in to the repository.

To add a dependency:

docker-compose exec web bash

Once you're inside the Docker container, run:

yarn add my-needed-new-lib

Exit the container and check in the changes to package.json and yarn.lock.

To upgrade a dependency:

docker-compose exec web bash

Once you're inside the Docker container, run:

yarn upgrade-interactive --latest
Use the interactive prompt. Exit the container and check in the changes to package.json and yarn.lock.
2.2.8 Customizing with environment variables
Environment variables are used to change the way different components work. There are a few ways to change an environment variable:

• Exporting in the shell, such as:

export DEBUG=True; ./manage.py runserver

• A one-time override, such as:

DEBUG=True ./manage.py runserver

• Changing the environment list in docker-compose.yml.
• Creating a .env file in the repository root directory.

One variable you may wish to alter for local development is DEBUG_TOOLBAR, which, when set to True, will enable the Django Debug Toolbar:

DEBUG_TOOLBAR=True
Note that enabling the Debug Toolbar can severely impact response time, adding around 4 seconds to page load time.
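Settings like DEBUG_TOOLBAR are read from the environment as strings at startup and coerced to booleans. A sketch of that coercion, in the style Django settings modules commonly use (the env_bool helper is ours, not Kuma's):

```python
import os

def env_bool(name, default=False):
    """Read an environment variable as a boolean, settings-module style."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("true", "1", "yes", "on")

os.environ["DEBUG_TOOLBAR"] = "True"
assert env_bool("DEBUG_TOOLBAR") is True
assert env_bool("NOT_SET", default=False) is False
```

This is why all four override mechanisms above are equivalent: each just changes what os.environ contains when the settings module is imported.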
2.2.9 Customizing number of workers
The docker-compose.yml in git comes with a default setting of 4 celery workers and 4 gunicorn workers. That's pretty resource intensive, since they prefork. To change the number of gunicorn and celery workers, consider setting this in your .env file:

CELERY_WORKERS=2
GUNICORN_WORKERS=3
In that example, only 2 celery workers and 3 gunicorn workers will be started for your environment.
2.2.10 Customizing the Docker environment
Running docker-compose will create and run several containers, and each container's environment and settings are configured in docker-compose.yml. The settings are "baked" into the containers created by docker-compose up. To override a container's settings for development, use a local override file.

For example, the web service runs in a container with the default command "gunicorn -w 4 --bind 0.0.0.0:8000 --timeout=120 kuma.wsgi:application". (The container has a name that begins with kuma_web_1_ and ends with a string of random hex digits. You can look up the name of your particular container with docker ps | grep kuma_web. You'll need this container name for some of the commands described below.)

A useful alternative for debugging is to run a single-threaded process that loads the Werkzeug debugger on exceptions (see the docs for runserver_plus), and that allows for stepping through the code with a debugger. To use this alternative, create an override file docker-compose.override.yml:

version: "2.1"
services:
  web:
    command: ./manage.py runserver_plus 0.0.0.0:8000
    stdin_open: true
    tty: true
This is similar to “docker run -it
You can then add pdb breakpoints to the code (import pdb; pdb.set_trace()) and connect to the debugger with:

docker attach
A similar method can be used to override environment variables in containers, run additional services, or make other changes. See the docker-compose documentation for more ideas on customizing the Docker environment.
2.2.11 Customizing the database
The database connection is defined by the environment variable DATABASE_URL, with this default:

DATABASE_URL=mysql://root:kuma@mysql:3306/developer_mozilla_org

The format is defined by the dj-database-url project:

DATABASE_URL=mysql://user:password@host:port/database

If you configure a new database, override DATABASE_URL to connect to it. To add an empty schema to a freshly created database:

./manage.py migrate

To connect to the database specified in DATABASE_URL, use:

./manage.py dbshell
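dj-database-url turns that URL into a Django DATABASES configuration dict. The parsing itself is ordinary URL splitting, which can be sketched with the standard library (this is a simplified illustration, not dj-database-url's actual implementation):

```python
from urllib.parse import urlsplit

def parse_database_url(url):
    """Split a dj-database-url style URL into connection settings (sketch only)."""
    parts = urlsplit(url)
    return {
        "ENGINE": parts.scheme,  # dj-database-url maps this scheme to a Django backend
        "USER": parts.username or "",
        "PASSWORD": parts.password or "",
        "HOST": parts.hostname or "",
        "PORT": parts.port,
        "NAME": parts.path.lstrip("/"),
    }

config = parse_database_url("mysql://root:kuma@mysql:3306/developer_mozilla_org")
assert config["USER"] == "root" and config["NAME"] == "developer_mozilla_org"
```

The real library additionally maps the scheme to a full backend path (e.g. django.db.backends.mysql) and handles query-string options.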
2.2.12 Generating production assets
Setting DEBUG=False will put you in production mode, which adds additional asset processing:

• JavaScript modules are combined into single JS files.
• CSS and JS files are minified and post-processed by cleancss and UglifyJS.
• Assets are renamed to include a hash of their contents, by a variant of Django's ManifestStaticFilesStorage.

In production mode, assets and their hashes are read once when the server starts, for efficiency. Any changes to assets require rebuilding with make build-static and restarting the web process.

To emulate production, and test compressed and hashed assets locally:

1. Set the environment variable DEBUG=False.
2. Start (docker-compose up -d) your Docker services.
3. Run docker-compose run --rm -e DJANGO_SETTINGS_MODULE=kuma.settings.prod web make build-static.
4. Restart the web process using docker-compose restart web.
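The renaming step works by hashing each file's contents and splicing the digest into the file name, so browsers can cache aggressively while changed files get new names. A simplified sketch of the idea (Django's ManifestStaticFilesStorage also writes a manifest mapping original to hashed names; that part is omitted here):

```python
import hashlib
from pathlib import PurePosixPath

def hashed_name(path, content):
    """Splice the first 12 hex chars of the content's MD5 digest into the name."""
    digest = hashlib.md5(content).hexdigest()[:12]
    p = PurePosixPath(path)
    return str(p.with_name(f"{p.stem}.{digest}{p.suffix}"))

print(hashed_name("styles/main.css", b"hello"))  # styles/main.5d41402abc4b.css
```

Because the name depends only on the content, an unchanged file keeps its name (and stays cached), while any edit produces a new name that busts the cache.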
2.2.13 Using secure cookies
To prevent error messages like "Forbidden (CSRF cookie not set.)", set the environment variable:

CSRF_COOKIE_SECURE=false
This is the default in Docker, which does not support local development with HTTPS.
2.2.14 Maintenance mode
Maintenance mode is a special configuration for running Kuma in read-only mode, where all operations that would write to the database are blocked. As the name suggests, it's intended for those times when we'd like to continue to serve documents from a read-only copy of the database while performing maintenance on the master database.

For local Docker-based development in maintenance mode:

1. If you haven't already, create a read-only user for your local MySQL database:

docker-compose up -d
docker-compose exec web mysql -h mysql -u root -p
(when prompted for the password, enter "kuma")
mysql> source ./scripts/create_read_only_user.sql
mysql> quit
2. Create a .env file in the repository root directory, and add these settings:

MAINTENANCE_MODE=True
DATABASE_USER=kuma_ro
Using a read-only database user is not required in maintenance mode. You can run in maintenance mode just fine with only this setting:

MAINTENANCE_MODE=True

and a database user that has write privileges. The read-only database user simply provides a level of safety, as well as notification (for example, an exception will be raised if an attempt to write to the database slips through).

3. Update your local Docker instance:

docker-compose up -d
4. You may need to recompile your static assets and then restart:
docker-compose exec web make build-static
docker-compose restart web
You should be good to go! There is a set of integration tests for maintenance mode. If you’d like to run them against your local Docker instance, first do the following: 1. Load the latest sample database (see Load the sample database). 2. Ensure that the test document “en-US/docs/User:anonymous:uitest” has been rendered (all of its macros have been executed). You can check this by browsing to http://localhost:8000/en-US/docs/User:anonymous:uitest. If there is no message about un-rendered content, you are good to go. If there is a message about un-rendered content, you will have to put your local Docker instance back into non-maintenance mode, and render the document:
• Configure your .env file for non-maintenance mode:
MAINTENANCE_MODE=False
DATABASE_USER=root
• docker-compose up -d
• Using your browser, do a shift-reload on http://localhost:8000/en-US/docs/User:anonymous:uitest and then put your local Docker instance back in maintenance mode:
• Configure your .env file for maintenance mode:
MAINTENANCE_MODE=True
DATABASE_USER=kuma_ro
• docker-compose up -d
3. Configure your environment with DEBUG=False because the maintenance-mode integration tests check for the non-debug version of the not-found page:
DEBUG=False
MAINTENANCE_MODE=True
DATABASE_USER=kuma_ro
This, in turn, will also require you to recompile your static assets:
docker-compose up -d
docker-compose exec web ./manage.py compilejsi18n
docker-compose exec web ./manage.py collectstatic
docker-compose restart web
Now you should be ready for a successful test run: py.test --maintenance-mode -m "not search" tests/functional --base-url http://localhost:8000 --driver Chrome --driver-path /path/to/chromedriver
Note that the “search” tests are excluded. This is because the tests marked “search” are not currently designed to run against the sample database.
2.2.15 Serving over SSL / HTTPS
Kuma can be served over HTTPS locally with a self-signed certificate. Browsers consider self-signed certificates to be unsafe, and you'll have to confirm that you want an exception for this.
1. If you want GitHub logins:
• In the Django Admin for Sites, ensure that site #2's domain is set to developer.127.0.0.1.nip.io.
• In GitHub, generate a new GitHub OAuth app for the test SSL domain, modifying the process in the Update the Sites section. When creating the GitHub OAuth app, replace http://localhost:8000 with https://developer.127.0.0.1.nip.io in both URLs. When creating the SocialApp in Kuma, choose the developer.127.0.0.1.nip.io site.
2. Include the SSL containers by updating .env:
COMPOSE_FILE=docker-compose.yml:docker-compose.ssl.yml
3. Run the new containers:
docker-compose up -d
4. Load https://developer.127.0.0.1.nip.io/en-US/ in your browser, and add an exception for the self-signed certificate.
5. Load https://demos.developer.127.0.0.1.nip.io/en-US/ in your browser, and add an exception for the self-signed certificate again.
Some features of SSL-protected sites may not be available, because the browser does not fully trust the self-signed SSL certificate. The HTTP-only website will still be available at http://localhost:8000/en-US/, but GitHub logins will not work.
2.2.16 Enabling PYTHONWARNINGS
Python ignores some warnings by default, including DeprecationWarning. To see these warnings, you can set the PYTHONWARNINGS environment variable in your .env file. For example:
# Show every warning, every time it occurs
PYTHONWARNINGS=always
Or alternatively:
# Show every warning, but ignore repeats
PYTHONWARNINGS=default
Note: Explicitly setting PYTHONWARNINGS=default will not do what you expect. It actually disables the default filters, ensuring that every warning gets displayed, but only the first time it occurs on a given line. See the PYTHONWARNINGS docs for more information on possible values.
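The effect of these filters can be demonstrated from Python itself. This standalone sketch emulates PYTHONWARNINGS=always by installing the same filter programmatically:

```python
import warnings

def call_deprecated_api():
    # Stand-in for a library call that emits a deprecation warning.
    warnings.warn("this API is deprecated", DeprecationWarning)

# DeprecationWarning is normally filtered out; the "always" filter,
# like PYTHONWARNINGS=always, surfaces every occurrence.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    call_deprecated_api()
    call_deprecated_api()

print(len(caught), caught[0].category.__name__)
```

With the "default" filter instead of "always", only the first occurrence per location would be recorded, matching the behavior described in the note above.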
2.2.17 Configuring AWS S3
The publish and unpublish Celery tasks and Django management commands require AWS S3 to be configured in order for them to do any real work, that is, creating/updating/deleting S3 objects used by the stage/production document API. In stage and production, the S3 bucket name as well as the AWS credentials are configured via the container environment, which, in turn, gets the AWS credentials from a Kubernetes secrets resource. For local development, there is no need for any of this configuration. The publish and unpublish tasks will simply be skipped (although, for verification/debugging purposes, you can see the detailed skip messages in the worker log: docker-compose logs -f worker). However, if for testing purposes you'd like to locally configure the publish and unpublish tasks to use S3, you can simply add the following to your .env file:
MDN_API_S3_BUCKET_NAME=
2.2.18 Enabling django-querycount
If you want to find out how many SQL queries are made, per request, even if they are XHR requests, you can simply add this to your .env file: ENABLE_QUERYCOUNT=true
Stop and start docker-compose, and a table will now be printed on stdout for every request URL, showing how many queries the request involved and how many of them were duplicates. If you want more insight into the duplicate queries, add this to your .env:
QUERYCOUNT_DISPLAY_DUPLICATES=3
A number greater than the default of 0 means it will print that many of the most repeated SQL queries.
2.3 Troubleshooting
Kuma has many components. Even core developers need reminders of how to keep them all working together. This doc outlines some problems and potential solutions running Kuma.
2.3.1 Kuma “Reset”
These commands will reset your environment to a "fresh" version, with the current third-party libraries, while retaining the database:
cd /path/to/kuma
docker-compose down
make clean
git submodule sync --recursive && git submodule update --init --recursive
docker-compose pull
docker-compose build --pull
docker-compose up
2.3.2 Reset a corrupt database
The Kuma database can become corrupted if the system runs out of disk space, or is unexpectedly shut down. MySQL can repair some issues, but sometimes you have to start from scratch. When this happens, pass an extra argument --volumes to down:
cd /path/to/kuma
docker-compose down --volumes
make clean
git submodule sync --recursive && git submodule update --init --recursive
docker-compose pull
docker-compose build --pull
docker-compose up -d mysql
sleep 20  # Wait for MySQL to initialize. See notes below
docker-compose up
The --volumes flag will remove the named MySQL database volume, which will be recreated when you run docker-compose up. The mysql container will take longer to start up as it recreates an empty database, and the kuma container will fail until the mysql container is ready for connections. The 20 second sleep should be sufficient, but if it is not, you may need to cancel and run docker-compose up again. Once mysql is ready for connections, follow the installation instructions, starting at Provisioning a database, to configure your now empty database.
2.3.3 Run alternate services
Docker services run as containers. To change the commands or environments of services, it is easiest to add an override configuration file, as documented in Customizing the Docker environment.
2.3.4 Linux file permissions
On Linux, it is common that files created inside a Docker container are owned by the root user on the host system. This can cause problems when trying to work with them after creation. We are investigating solutions to create files as the developer’s user. In some cases, you can specify the host user ID when running commands: docker-compose run --rm --user $(id -u) web ./manage.py collectstatic
In other cases, the command requires root permissions inside the container, and this trick can’t be used. Another option is to allow the files to be created as root, and then change them to your user on the host system: find . -user root -exec sudo chown $(id -u):$(id -g) \{\} \;
2.3.5 KumaScript macros are not evaluating
If you’re seeing tags like {{HTMLSidebar}} or {{HTMLElement("head")}} it could be happening because there is an outdated macro that needs to be removed. For example on /en-US/docs/Web/HTML, there is a deleted macro called CommunityBox. To fix this, log in to edit the page, remove the CommunityBox macro, then click “Publish”. Visit the affected page again and you should see actual content instead of the macros. Note: Sometimes the wiki site (e.g. http://wiki.localhost.org:8000/en-US/docs/Web/HTML$edit) will throw an error after editing the page, acting as if it didn’t save your edit. View the actual URL of the page (e.g. http://localhost.org:8000/en-US/docs/Web/HTML) to verify that the changes were accepted.
2.3.6 Fixing a 404 error on a specific page
This most likely happens because the document isn’t in the database yet, so it needs to be scraped. Run the command ./manage.py scrape_document [url of document], e.g.: docker-compose exec web ./manage.py scrape_document https://developer.mozilla.org/en-US/docs/Glossary/HTML
Memorize or have handy the ./manage.py scrape_document command since this will come up often. See Add an MDN Wiki Document for more details.
2.3.7 Getting more help
Check if there is anything helpful in the logs:
docker-compose logs web kumascript
docker-compose logs web
If you have more problems running Kuma, please: 1. Paste errors to pastebin.
2. Start a thread on Discourse. 3. After you email dev-mdn, you can also ask in the #mdn-dev Slack channel.
2.4 The Kuma test suite
Kuma has a fairly comprehensive Python test suite. Changes should not break tests. Only change a test if there is a good reason to change the expected behavior. New code should come with tests. Commands should be run inside the development environment, after make bash.
2.4.1 Setup
Before you run the tests, you have to build assets: make build-static
2.4.2 Running the test suite
If you followed the steps in the installation docs, then all you should need to do to run the test suite is: make test
The default options for running the test are in pytest.ini. This is a good set of defaults. If you ever need to change the defaults, you can do so at the command line by running what the Make task does behind the scenes: py.test kuma
Some helpful command line arguments to py.test (they won't work with make test):
• --pdb: Drop into pdb on test failure.
• --create-db: Create a new test database.
• --showlocals: Show local variables in tracebacks on errors.
• --exitfirst: Exit on the first failure.
See py.test --help for more arguments.
Running subsets of tests and specific tests
There are a bunch of ways to specify a subset of tests to run: • only tests marked with the ‘spam’ marker: py.test -m spam
• all the tests but those marked with the ‘spam’ marker: py.test -m "not spam"
• all the tests but the ones in kuma/core:
py.test --ignore kuma/core
• all the tests that have “foobar” in their names: py.test -k foobar
• all the tests that don’t have “foobar” in their names: py.test -k "not foobar"
• tests in a certain directory: py.test kuma/wiki/
• specific test: py.test kuma/wiki/tests/test_views.py::RedirectTests::test_redirects_only_internal
See http://pytest.org/latest/usage.html for more examples.
Showing test coverage
While running the tests you can record which part of the code base is covered by test cases. To show the results at the end of the test run use this command: make coveragetest
To generate an HTML coverage report, use: make coveragetesthtml
The test database
The test suite will create a new database named test_%s where %s is whatever value you have for settings.DATABASES['default']['NAME']. Make sure the user has ALL on the test database as well.
2.4.3 Markers
See:
py.test --markers
for the list of available markers. To add a marker, add it to the pytest.ini file. To use a marker, add a decorator to the class or function. Examples:
import pytest

@pytest.mark.spam
class SpamTests(TestCase):
    ...

class OtherSpamTests(TestCase):
    @pytest.mark.spam
    def test_something(self):
        ...
2.4.4 Adding tests
Code should be written so that it can be tested, and then there should be tests for it. When adding code to an app, tests should be added in that app that cover the new functionality. All apps have a tests module where tests should go. They will be discovered automatically by the test runner as long as they look like a test.
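A discovery-friendly test might look like the following sketch. The helper function and file layout are hypothetical, not real Kuma code; what matters is the naming: files named test_*.py containing functions (or TestCase methods) named test_* are picked up automatically:

```python
# Hypothetical contents of kuma/<someapp>/tests/test_tags.py

def add_tag(tags, new_tag):
    """Illustrative helper: add a tag, keeping the result sorted and unique."""
    return sorted(set(tags) | {new_tag})

def test_add_tag_deduplicates():
    # The test_ prefix is what makes the runner discover this function.
    assert add_tag(["CSS", "HTML"], "CSS") == ["CSS", "HTML"]

test_add_tag_deduplicates()
print("ok")
```

Running py.test on the app directory would collect and execute any such function without registration or configuration.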
2.4.5 Changing tests
Unless the current behavior, and thus the test that verifies that behavior is correct, is demonstrably wrong, don’t change tests. Tests may be refactored as long as it’s clear that the result is the same.
2.4.6 Removing tests
On those rare, wonderful occasions when we get to remove code, we should remove the tests for it, as well. If we liberate some functionality into a new package, the tests for that functionality should move to that package, too.
2.5 Client-side testing
Kuma has a suite of functional tests using pytest. All of these test suites live in the /tests/ directory, which comprises:
• /headless contains tests that don't require a browser.
• /utils contains helper functions.
2.5.1 Setting up test environment
1. In the terminal, go to the directory where you have downloaded kuma.
2. Install the requirements the tests need to run:
$ poetry install
2.5.2 Running the tests
The minimum amount of information necessary to run the tests against the staging server is the directory the tests are in. Currently, all of the tests should run successfully against the staging server:
$ poetry run pytest tests
This basic command can be modified to run the tests against a local environment, or to run tests concurrently.
Only running tests in one file
Add the name of the file to the test location: $ poetry run pytest tests/headless/test_cdn.py
Run the tests against a different url
By default the tests will run against the staging server. If you’d like to run the tests against a different URL (e.g., your local environment) pass the desired URL to the command with --base-url: $ poetry run pytest tests --base-url http://mdn.localhost:8000
Run the tests in parallel
By default the tests will run one after the other but you can run several at the same time by specifying a value for -n: $ poetry run pytest tests -n auto
2.5.3 Using Alternate Environments
Run tests on MDN’s Continuous Integration (CI) infrastructure
If you have commit rights on the mdn/kuma GitHub repository you can run the functional tests using the MDN CI Infrastructure. Just force push to mdn/kuma@stage-integration-tests to run the tests against https://developer.allizom.org. You can check the status, progress, and logs of the test runs at MDN’s Jenkins-based multi-branch pipeline.
2.5.4 MDN CI Infrastructure
The MDN CI infrastructure is a Jenkins-based, multi-branch pipeline. The pipelines for all branches are defined by the Jenkinsfile and the files under the Jenkinsfiles directory. The basic idea is that every branch may have its own custom pipeline steps and configuration. Jenkins will auto-discover the steps and configuration by checking within the Jenkinsfiles directory for a Groovy (.groovy) and/or YAML (.yml) file with the same name as the branch.

For example, the "stage-integration-tests" branch has a Jenkinsfiles/stage-integration-tests.yml file which will be loaded as configuration and used to determine what to do next (load and run the Groovy script specified by its pipeline.script setting - Jenkinsfiles/integration-tests.groovy - and the script, in turn, will use the dictionary of values provided by the job setting defined within the configuration). Note that the YAML files for the integration-test branches provide settings for configuring things like the Dockerfile to use to build the container, and the URL to test against.

The integration-test Groovy files use a number of global Jenkins functions that were developed to make the building, running, and pushing of Docker containers seamless (as well as other cool stuff, see mozmar/jenkins-pipeline). They allow us to better handle situations that have been painful in the past, like the stopping of background Docker containers.

The "prod-integration-tests" branch also has its own Jenkinsfiles/prod-integration-tests.yml file. It's identical to the YAML file for the "stage-integration-tests" branch except that it specifies that the tests should be run against the production server rather than the staging server (via its job.base_url setting). Similarly, the "master" branch has its own pipeline, but instead of being configured by a YAML file, the entire pipeline is defined within its Jenkinsfiles/master.groovy file.
The pipeline for any other branch which does not provide its own Groovy and/or YAML file will follow that defined by the Jenkinsfiles/default.groovy file. You can check the status, progress, and logs of any pipeline runs via MDN’s Jenkins-based multi-branch pipeline.
2.5.5 Markers
• nondestructive
Tests are considered destructive unless otherwise indicated. Tests that create, modify, or delete data are considered destructive and should not be run in production.
• smoke
These tests should be the critical baseline functional tests.
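Applying these markers in a functional test looks like the following sketch (the test name and body are illustrative, not from Kuma's suite):

```python
import pytest

# A read-only check that is safe to run against production (nondestructive)
# and part of the critical baseline (smoke).
@pytest.mark.smoke
@pytest.mark.nondestructive
def test_home_page_has_title():
    assert True  # placeholder body for the sketch

# pytest records applied marks on the function's pytestmark attribute.
print(sorted(mark.name for mark in test_home_page_has_title.pytestmark))
```

Selecting by marker then works as shown earlier, for example py.test -m "smoke and nondestructive".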
2.5.6 Guidelines for writing tests
See Bedrock and the Web QA Style Guide.
2.6 Feature toggles
MDN uses feature toggles to integrate unfinished feature changes as early as possible, and to control the behavior of finished features. Some site features are controlled using django-waffle. You control these features in the Django admin site's waffle section. Some site features are controlled using constance. You control these features in the Django admin site's constance section.
2.6.1 Waffle features
Switches
Waffle switches are simple booleans - they are either on or off.
• application_ACAO - Enable Access-Control-Allow-Origin=0 header.
• foundation_callout - Show foundation donate homepage tile.
• helpful-survey-2 - Enable 2017 Helpfulness Survey.
• registration_disabled - Enable/disable new user registration.
• store_revision_ips - Save request data, including the IP address, to enable marking revisions as spam.
• welcome_email - Send welcome email to new user registrations.
• wiki_spam_training - Call Akismet to check submissions, but don't block due to detected spam or Akismet errors.
• developer_needs - Enable/disable the developer needs survey banner.
Flags
Waffle flags control behavior for specific users, groups, percentages, and other advanced criteria.
• contrib_beta - Enable/disable the contributions popup and pages.
• kumaediting - Enable/disable wiki editing.
• page_move - (deprecated) Enable/disable the page move feature.
30 Chapter 2. Getting started Kuma Documentation, Release latest
• section_edit - Show section edit buttons.
• sg_task_completion - Enable the Survey Gizmo pop-up.
• spam_admin_override - Tell Akismet that edits are never spam.
• spam_checks_enabled - Toggle spam checks site wide.
• spam_spammer_override - Tell Akismet that edits are always spam.
• spam_submissions_enabled - Toggle Akismet spam/spam submission ability.
• spam_testing_mode - Tell Akismet that edits are tests, not real content.
• subscription_banner - Enable/disable the subscriptions banner.
• subscription_form - Enable/disable the subscriptions form.
2.6.2 Constance features
Constance configs let us set operational values for certain features in the database - so we can control them without changing code. They are all listed and documented in the admin site’s constance section.
2.6.3 Using Traffic Cop for A/B Testing
Traffic Cop is a lightweight JavaScript library that allows conducting content experiments on MDN without the need for third-party tools such as Optimizely. Traffic Cop also respects user privacy, and will not run if Do Not Track is enabled. On MDN, Traffic Cop will also not run for logged-in users. Here's how to implement a Traffic Cop content experiment.

The first task is to choose your control page and create your test pages. Here we will use the scenario of testing a new feature on MDN: replacing the static examples at the top of the page with an interactive editor.

First we choose our control page. For this example, we will use: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Array/push

Next we need our test page. The convention on MDN is to first create a base page that follows a naming convention such as: https://developer.mozilla.org/docs/Experiment:ExperimentName/

With the base page created, we next create our test variation of our chosen page. To do this, navigate to: https://developer.mozilla.org/docs/Experiment:ExperimentName/Array.prototype.push()

You should now be presented with an editing interface to create the new page. Switch back to the control page, and click the Edit button. Switch the editor to source mode and copy all of the contents inside the editor. Switch back to our test page, ensure the editor is set to source mode, and paste the content you just copied. You can now go ahead and make the changes you wish to user-test. Once you are done, go ahead and publish the page.
The JavaScript
Let us first add the JavaScript code to initialise and configure Traffic Cop. Inside kuma/static/js, create a new file called something like experiment-experiment-name.js and paste the following code into the file:
(function() {
    'use strict';

    var cop = new Mozilla.TrafficCop({
        id: 'experiment-experiment-name',
        variations: {
            'v=a': 50, // control
            'v=b': 50  // test
        }
    });

    cop.init();
})(window.Mozilla);
This will initialise Traffic Cop, set up an experiment with the id experiment-experiment-name, and lastly define a 50/50 split between the control and the test page.
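Conceptually, the weighted split maps a random roll onto cumulative percentage buckets. The following Python sketch illustrates the idea (not Traffic Cop's actual implementation):

```python
def choose_variation(variations, roll):
    """Map a roll in [0, 100) onto cumulative percentage buckets.
    Returns None if the roll falls outside all variations (no-experiment)."""
    cumulative = 0
    for key, weight in variations.items():
        cumulative += weight
        if roll < cumulative:
            return key
    return None

variations = {"v=a": 50, "v=b": 50}  # mirrors the experiment config above
print(choose_variation(variations, 25))  # falls in the first bucket
print(choose_variation(variations, 75))  # falls in the second bucket
```

If the weights summed to less than 100, the remainder of rolls would fall through to None, i.e. visitors who see the page with no experiment variation at all.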
Define your bundle
Next we need to add an entry into kuma/settings/common.py to identify the test and the related JS that will be injected. Find the following line in common.py:
PIPELINE_JS = {
Just before the closing } add a block such as the following:
'experiment-experiment-name': {
    'source_filenames': (
        'js/libs/mozilla.dnthelper.js',
        'js/libs/mozilla.cookiehelper.js',
        'js/libs/mozilla.trafficcop.js',
        'js/experiment-experiment-name.js',
    ),
    'output_filename': 'build/js/experiment-experiment-name.js',
},
NOTE: The key here experiment-experiment-name needs to match the id you specified in your JS file above.
Identify your A/B test pages
The final step is to identify the pages to the back-end so it will know where to direct traffic based on the URL parameter that will be added by Traffic Cop. Inside kuma/settings/content_experiments.json add the following:
[
    {
        "id": "experiment-experiment-name",
        "ga_name": "experiment-name",
        "param": "v",
        "pages": {
            "en-US:Web/JavaScript/Reference/Global_Objects/Array/push": {
                "a": "Web/JavaScript/Reference/Global_Objects/Array/push",
                "b": "Experiment:ExperimentName/Array.prototype.push()"
            }
        }
    }
]
There are a couple of important points to note here:
1. We are leaving off the domain, as well as the docs/ part of the URL.
2. As with the entry in common.py, the id here matches the id in your JS file, tying it all together.
3. The param value needs to match the string you specified inside the variations block in your JS file.
4. The first part of our key under pages above identifies the locale to which this will apply, en-US in this case.
5. The keys for the two pages listed next need to match the values you used as the parameter value inside variations in your JS file earlier.
Testing your experiment
With your local instance of Kuma running, navigate to the page you defined as your control. In this example: http://localhost:8000/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/push
NOTE: You should not be logged in to MDN, and ensure that Do Not Track is disabled.
Your experiment JavaScript code bundle defined in common.py should be injected into the page, and Traffic Cop will add a URL parameter to the page that is either v=a or v=b. Depending on which, you will see either the control (a) or the variation (b). You can also force a specific page to load by manually appending ?v=a or ?v=b to the control page URL. If all of the above works as expected, open up a pull request, and tag someone on MDN for review.
2.7 Celery and async tasks
Kuma uses Celery to enable asynchronous task processing for long-running jobs and to speed up the request-response cycle.
2.7.1 When is Celery appropriate
You can use Celery to do any processing that doesn't need to happen in the current request-response cycle. Ask yourself the question: "Is the user going to need this data on the page I'm about to send them?" If not, using a Celery task may be a good choice. Some examples where Celery is used:
• Sending notification emails on page changes - If this was done in the page change request, then pages with many watchers would be slower to edit, and errors in email sending would result in a "500 Internal Server Error" for the editor.
• Populating the contributor bar - Gathering and sorting the contributor list can be a slow process that would delay rendering of stale pages. Instead, viewers quickly see the content, possibly without the contributor bar. Once the async task has populated the cache, future viewers get the content and the contributor bar.
• Generating the spam moderator dashboard - Calculating spam statistics is a potentially long process, and if it takes more than 30 seconds the viewer will get a "502 Bad Gateway" error. The async task can take as long as needed, and the spam moderator will see the dashboard when the data is calculated and cached.
• Rebuilding sitemaps - Search engines need recent site URL data, and it can't be assembled quickly in the request for the sitemap. An async request repopulates the cached sitemap data, keeping this data fast and up-to-date.
There are some downsides to async tasks that should also be considered:
• Message passing, serialization, de-serialization, and data reloading increase the load on the entire system.
• Async tasks require different testing strategies. It is easy to write a passing test that doesn't reflect how the task is called in production.
• Runtime errors are more difficult to troubleshoot with async tasks, due to missing context on how they were called.
In general, it is better to get an algorithm right in the request loop, and only move it to an asynchronous task when it is identified as a performance issue.
2.7.2 Celery services
A working Celery installation requires several services.
Worker
Celery processes tasks with one or more workers. In Kuma, the workers and web processes share a code base, so that Django models, functions, and settings are available to async tasks, and web code can easily schedule async tasks. In Docker, the worker process runs in the worker service / container.
Broker
Kuma uses Redis as the message broker. There are some caveats to be aware of. Redis is used both in local development and in production.
Result store
When a task completes, it returns processed data and task states to a result store. Kuma doesn't use the returned data, but it does use the returned task state to coordinate multi-step tasks, in particular chord tasks such as building a sitemap of sitemaps after each locale's sitemap is built. Check the logs to see messages about the completion of Celery worker tasks.
Periodic tasks scheduler
Periodic tasks are scheduled with celery beat, which adds tasks to the task queue when they become due. It should only be run once in a deployment, or tasks may be scheduled multiple times. In Docker, it runs in the worker container by starting the celery process with --beat. In production, there are several task workers, and the celery beat process is run directly on just one worker. All scheduled periodic tasks are configured in code. As a pattern, each Django app (e.g. kuma.wiki) that has tasks to run periodically wires them up within the ready method of its app config class (e.g. kuma.wiki.apps.WikiConfig).
2.7.3 Configuring and running Celery
We set some reasonable defaults for Celery in kuma/settings/common.py. These can be overridden by the environment variables, including:
• CELERY_TASK_ALWAYS_EAGER - Default: False (Docker), True (tests). When True, tasks are executed immediately, instead of being scheduled and executed by a worker, skipping the broker, result store, etc. In theory, tasks should act the same whether executed eagerly or normally. In practice, there are some tasks that fail or have different results in the two modes, mostly due to database transactions.
• CELERY_WORKER_CONCURRENCY - Default: 4. Each worker will execute this number of tasks at the same time. 1 is a good value for debugging, and the number of CPUs in your environment is good for performance.
The worker can also be adjusted at the command line. For example, this could run inside the worker service container:
celery -A kuma.celery:app worker --log-level=DEBUG -c 10
This would start Celery with 10 worker threads and a log level of DEBUG.
2.8 Search
Kuma uses Elasticsearch to power its on-site search.
2.8.1 Installed version
Elasticsearch releases new versions often. They release a new minor version (for example, 6.2.0, 6.3.0) every 30 to 90 days, and support the most recent minor version. They release a major version (5.0.0, 6.0.0) every 12 to 18 months, and support the last minor version of the previous major release (for example, 2.4.x supported until 6.x is released, 5.6.x supported until 7.x is released). Kuma's goal is to update to the last minor version of the previous major release, so we expect to update every 12 to 18 months. Kuma is running on Elasticsearch 6.7.1 in development and production, which is planned to be supported until September 2020. The Kuma search tests use the configured Elasticsearch, and can be run inside the web Docker container to confirm the engine works:
py.test kuma/search/
2.8.2 Indexing documents
To see search results, you will need to index the documents.
Using the Admin
This process works both in a development environment and in production:
• Open the Django admin search index list view. On Docker, this is located at http://localhost.org:8000/admin/search/index/.
• On the search index list, select the checkbox next to each index. Then select "Delete selected Indexes" from the dropdown menu.
• Inside the web Docker container: docker-compose exec web ./manage.py reindex
• Refresh the search index list until the "populated" field changes to a green checkbox image. In production, you will also get an email notifying you when the index is populated.
• Select the checkbox next to the populated index, then choose "Promote selected search index to current index" in the actions dropdown. Click "Go" to promote the index.
• All three fields will be green checkboxes (promoted, populated, and "is current index?"). The index is live, and will be used for site search.
Similarly, you can also demote a search index, and it will automatically fall back to the previously created index (by created date). That helps to figure out issues in production and should allow for a smooth deployment of search index changes. It's recommended to keep a few search indexes around just in case. If no index is created in the admin UI, the fallback "main_index" index will be used instead. When you delete a search index in the admin interface, it is deleted on Elasticsearch as well.
Using the shell
Inside the web Docker container: ./manage.py reindex
This will populate and activate the fallback index “main_index”. It will be overwritten when the search tests run.
2.8.3 Search Filters
The search interface uses Filters and Filter Groups to determine what results are shown. Currently, the only active Filter Group is "Topics". The Filters collect documents by tag. For example, the Filter "Add-ons & Extensions" collects all documents tagged with "Add-ons", "Extensions", "Plugins", or "Themes". Most filters have a single associated tag. For example, the "CSS" filter collects the documents with the "CSS" tag. Filters and Filter Groups are defined in the database, and their names are shown to users. The names of these objects (enabled and disabled) are listed in kuma/search/names.py, and marked for translation. To generate this file, run this command against the production database, and copy the output to names.py: ./manage.py generate_search_names
The sample database contains the active subset of Filters and Filter Groups. This won’t match the production database, and so the development environment should not be used to generate kuma/search/names.py.
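The tag-collection behaviour above can be sketched as follows. The mappings mirror the examples in the text, but the data structures are illustrative, not Kuma's models:

```python
# Each Filter collects documents carrying any of its associated tags.
FILTERS = {
    "Add-ons & Extensions": {"Add-ons", "Extensions", "Plugins", "Themes"},
    "CSS": {"CSS"},  # most filters have a single associated tag
}

def collect(filter_name, documents):
    """Return the slugs of documents matching any of the filter's tags."""
    tags = FILTERS[filter_name]
    return sorted(slug for slug, doc_tags in documents.items()
                  if tags & set(doc_tags))

documents = {
    "Web/CSS/display": ["CSS"],
    "Mozilla/Add-ons/WebExtensions": ["Add-ons", "Extensions"],
    "Web/HTML": ["HTML"],
}
print(collect("CSS", documents))
print(collect("Add-ons & Extensions", documents))
```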
2.9 Localization
Kuma is localized with gettext. User-facing strings in the code or templates need to be marked for gettext localization. We use Pontoon to provide an easy interface to localizing these files. Pontoon allows translators to use the web UI as well as download the PO files, use whatever tool the translator feels comfortable with and upload it back to Pontoon. The strings are stored in a separate repository, https://github.com/mozilla-l10n/mdn-l10n
Note: We do not accept pull requests for updating translated strings. Please use Pontoon instead.
See the Django documentation on Translations for how to mark strings for translation in Python and templates. Unless otherwise noted, run all the commands in this document inside the development environment. For Docker on Linux, set UID in .env or enter the environment with docker-compose run --rm --user $(id -u) web bash, to ensure that created files are owned by your development user.
2.9.1 Build the localizations
Localizations are found in this repository under the locale folder. This folder is a git submodule, linked to the mdn-l10n repository. The gettext portable object (.po) files need to be compiled into the gettext machine object (.mo) files before trans- lations will appear. This is done once during initial setup and provisioning, but will be out of date when the kuma locales are updated. To refresh the translations, first update the submodule in your host system: git submodule update --init --depth=10 locale
Next, enter the development environment, then: 1. Compile the .po files: make localecompile
2. Update the static JavaScript translation catalogs: make compilejsi18n
3. Collect the built files so they are served with requests: make collectstatic
2.9.2 Update the localizations in Kuma
To get the latest localization from Pontoon users, you need to update the submodule.
Note: This task is done by MDN staff or by automated tools during the push to production. You should not do this as part of a code-based Pull Request.
On the host system: 1. Create a new branch from kuma master:
git remote -v | grep origin # Should be main kuma repo git fetch origin git checkout origin/master git checkout -b update-locales # Pick your own name. Can be combined # with updating the kumascript submodule.
2. Update the locale submodule to master: cd locale git fetch git checkout origin/master cd ..
3. Commit the update: git commit locale -m "Updating localizations"
4. Push to GitHub (your fork or main repository), and open a Pull Request. It is possible to break deployments by adding a bad translation. The TravisCI job TOXENV=locales tests that deployment will succeed, and it should pass before the PR is merged.
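The TOXENV=locales job uses Dennis to catch translations whose format-string placeholders do not match the source string, since a mismatched placeholder can crash rendering in production. A minimal sketch of that kind of check (an illustration, not Dennis itself):

```python
import re

# Match python-format (%(name)s) and brace-format ({name}) placeholders.
PLACEHOLDER = re.compile(r"%\([^)]*\)s|\{[^}]*\}")

def placeholders(s):
    """Collect the format placeholders used in a string."""
    return sorted(PLACEHOLDER.findall(s))

def is_safe(msgid, msgstr):
    """A translation is safe if it uses exactly the source placeholders."""
    return placeholders(msgid) == placeholders(msgstr)

print(is_safe("Hello %(user)s", "Bonjour %(user)s"))  # True
print(is_safe("Hello %(user)s", "Bonjour %(usr)s"))   # False: typo
```

A translation failing this check is the kind of string that must be fixed in mdn-l10n before the PR can merge.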
2.9.3 Update the localizable strings in Pontoon
When localizable strings are added, changed, or removed in the code, they need to be gathered into .po files for translation. The TravisCI job TOXENV=locales attempts to detect when strings change by displaying the differences in locale/templates/LC_MESSAGES/django.pot, but only when a msgid changes. When this happens, the strings need to be exported to the mdn-l10n repository so that they are available in Pontoon. If done incorrectly, the work of localizers can be lost. This task is done by staff when preparing for deployment.
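"Only when a msgid changes" can be illustrated by comparing the sets of msgid entries in the old and new django.pot. This simplified sketch ignores multi-line msgids and is not the Travis job's actual implementation:

```python
import re

MSGID = re.compile(r'^msgid "(.*)"$', re.MULTILINE)

def msgids(pot_text):
    # The empty msgid "" is the PO header, not a translatable string.
    return {m for m in MSGID.findall(pot_text) if m}

old = 'msgid ""\nmsgstr "..."\n\n#: kuma/views.py:1\nmsgid "Sign in"\nmsgstr ""\n'
new = old + '\nmsgid "Sign out"\nmsgstr ""\n'

print(msgids(new) - msgids(old))  # strings that now need translation
print(msgids(old) - msgids(new))  # strings that were removed
```

Changes to comments or source-file references alone would leave both sets empty, so no export is needed.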
2.9.4 Add a new locale to Pontoon
The process for getting a new locale on MDN is documented at Starting a new MDN localization. One step is to enable translation of the UI strings. This will also enable the locale in development environments and on https://developer.allizom.org.
Note: This task is done by MDN staff.
This example shows adding a Bulgarian (bg) locale. Change bg to the locale code of the language you are adding. 1. Update the localizable strings in Pontoon as above, so that your commit will be limited to the new locale. 2. In kuma/settings/common.py, add the locale to ACCEPTED_LOCALES and CANDIDATE_LOCALES, and increase PUENTE[’VERSION’]. 3. Download the latest languages.json from https://product-details.mozilla.org/1.0/languages.json and place it at kuma/settings/languages.json. 4. Add the locale to translate_locales.html and the locale/ folder: make locale LOCALE=bg
5. Generate the compiled files for all the locales, including the new one: make localerefresh
6. Restart the web server and verify that Django loads the new locale without errors by visiting the locale’s home page, for example http://localhost:8000/bg/. 7. Commit the locale submodule and push to mdn-l10n, as described above in Updating the localizable strings in Pontoon. The other locales should include a new string representing the new language. 8. (Optional) Generate migrations that includes the new locale: ./manage.py makemigrations users wiki --name update_locale
9. Commit the changes to locale, jinja2/includes/translate_locales.html, and kuma/settings, and open a Pull Request. 10. Enable the language in Pontoon, and notify the language community to start UI translations.
2.9.5 Enable a new locale on MDN
Once the new translation community has completed the rest of the process for starting a new MDN localization, it is time to enable the language for page translations:
Note: This task is done by MDN staff.
1. Remove the locale from CANDIDATE_LOCALES in kuma/settings/common.py. Ensure it remains in ACCEPTED_LOCALES. 2. Restart the web server and verify that Django loads the new locale without errors by visiting the locale’s home page, for example http://localhost:8000/bg/. 3. Commit the change to kuma/settings/common.py and open a Pull Request. When the change is merged and deployed, inform the localization lead and the community that they can begin trans- lating content.
2.10 CKEditor
MDN Web Docs uses a WYSIWYG editor called CKEditor. CKEditor is an open source utility which brings the power of rich text editing to the web. This document details how to update CKEditor within the MDN codebase.
2.10.1 Building CKEditor
To rebuild CKEditor, run this on the host system: cd assets/ckeditor4/ ./docker-build.sh # Creates a Java container, builds CKEditor docker-compose exec web make build-static
This builds CKEditor within a Docker VM, using the Java ckbuilder tool from CKSource, then builds static resources so that the updated editor is installed where it belongs. To rebuild CKEditor locally, if you have Java installed: cd assets/ckeditor4/ ./build.sh docker-compose exec web make build-static
Portions of the build process will take a few minutes so don’t expect an immediate result.
2.10.2 Updating CKEditor
To update the CKEditor version, you’ll need to edit build.sh and change the value of CKEDITOR_VERSION. Updating is important to keep MDN in sync with CKEditor’s functional and security updates.
2.10.3 Updating CKEditor plugins
Some plugins are maintained by MDN staff in the Kuma repo. Others are updated to the tag or commit number specified in build.sh: • descriptionlist
• scayt • wsc • wordcount Changes to CKEditor plugins which are bundled into Kuma should be made in the directory /assets/ckeditor4/source/plugins/. Once you’ve made changes to a plugin, be sure to build the editor and the static resources again, as described in Building CKEditor.
2.10.4 Interactively developing CKEditor plugins
For small changes, it may be fastest to rebuild CKEditor for each change. For more involved changes, you can load CKEditor in development mode. To interactively develop CKEditor plugins, first download the CKEditor source code on the host system: cd assets/ckeditor4/source/ ./build.sh --download
Enable development mode by adding a line to .env: CKEDITOR_DEV=True
Restart Docker with docker-compose restart web to apply the new setting. This will switch to a mode where plugins are loaded dynamically, based on the list in /kuma/wiki/jinja2/wiki/includes/ckeditor_scripts.html, and you can see your changes by reloading the browser. When you are done, disable development mode in .env: CKEDITOR_DEV=False # Or comment out the line
Restart Docker with docker-compose restart web, and follow the instructions above for Building CKEditor, to delete the CKEditor source code and rebuild CKEditor with your changes.
2.10.5 Committing Changes
When updating CKEditor, adding a plugin, or changing the configuration, CKEditor should be rebuilt, and the results added to a new pull request. We currently do not check in CKEditor source, but do add third-party plugin sources. We check in the “built” files, which combine CKEditor with plugins and configuration, so that they do not need to be rebuilt by the static files process. This is enforced by .gitignore, so git add can be used: git add assets/ckeditor4
Reviewers should rebuild CKEditor to ensure this was done correctly. Building is not atomic, because a hex-encoded timestamp is embedded in some minified files: assets/ckeditor4/build/ckeditor/ckeditor.js assets/ckeditor4/build/ckeditor/skins/moono/editor.css assets/ckeditor4/build/ckeditor/skins/moono/editor_*.css
Small changes (with big diffs) to these files are expected. Changes to other files are not expected.
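A reviewer comparing a local rebuild against a PR can separate expected timestamp churn from real changes. A hypothetical helper (the file list comes from the text above; the function is illustrative):

```python
import fnmatch

# Files where small, timestamp-only diffs are expected after any rebuild.
EXPECTED_CHURN = [
    "assets/ckeditor4/build/ckeditor/ckeditor.js",
    "assets/ckeditor4/build/ckeditor/skins/moono/editor.css",
    "assets/ckeditor4/build/ckeditor/skins/moono/editor_*.css",
]

def unexpected_changes(changed_files):
    """Return changed files NOT explained by the embedded timestamp."""
    return [f for f in changed_files
            if not any(fnmatch.fnmatch(f, pat) for pat in EXPECTED_CHURN)]

changed = [
    "assets/ckeditor4/build/ckeditor/ckeditor.js",
    "assets/ckeditor4/build/ckeditor/skins/moono/editor_gecko.css",
    "assets/ckeditor4/build/ckeditor/plugins/a11yhelp/plugin.js",
]
print(unexpected_changes(changed))  # only this file needs real review
```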
2.11 Getting Data
A Kuma instance without data is useless. You can view the front page, but almost all interactive testing requires users, wiki documents, and other data.
2.11.1 The Sample Database
The sample database has a minimal set of data, based on production data, that is useful for manual and automated testing. This includes: • All documents linked from the English homepage, including: – Translations – Some historical revisions – Minimal author profiles for revisions • Additional documents for automated testing and feature coverage • Good defaults for constance variables KUMASCRIPT_TIMEOUT and KUMASCRIPT_MAX_AGE • Waffle flags and switches • Search filters and tags (but not a search index, which must be created locally - see Indexing documents) • The Mozilla Hacks feed • Test users, with appropriate groups and permissions: – test-super - A user with full permissions – test-moderator - A staff content moderator – test-new - A regular user account – test-banned - A banned user – viagra-test-123 - An unbanned spammer Test accounts can be accessed by entering the password test-password in the Django admin, and then returning to the site. See Load the sample database for instructions on loading the latest sample database.
2.11.2 Add an MDN User
If you need a user profile that is not in the sample database, you can scrape it from production or another Kuma instance, using the scrape_user command. In the container (after make bash or similar), run the following, replacing username with the desired user’s username: ./manage.py scrape_user username ./manage.py scrape_user https://developer.mozilla.org/en-US/profiles/username
Some useful options: --email [email protected] Set the email for the user, which can’t be scraped from the profile. With the correct email, the user’s profile image will be available. --social Scrape social data for the user, which is not scraped by default --force If a user exists in the current database, update it with scraped data.
For full options, see ./manage.py scrape_user --help A local user can be promoted to staff with the command: ./manage.py ihavepower username --password=password
2.11.3 Add an MDN Wiki Document
If you need a wiki page that is not in the sample database, you can scrape it from production or another Kuma instance, using the scrape_document command. In the container (after make bash or similar), run the following, using the desired URL: ./manage.py scrape_document https://developer.mozilla.org/en-US/docs/Web/CSS/display
You can also pass in multiple URLs instead of doing one URL at a time. Scraping a document includes: • The parent documents (such as Web and Web/CSS for Web/CSS/display) • A revision for each document, and the author for each revision • The English variant, if a translation is requested • Redirects, if a page move is detected It does not include: • Attachment data, or the attachments themselves. These will continue to use production URLs. Some useful options: --revisions REVS Scrape more than one revision --translations Scrape the translations of a page as well --depth DEPTH Scrape one or more levels of child pages as well --force Refresh an existing Document, instead of skipping For full options, see ./manage.py scrape_document --help
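The parent chain that gets scraped along with a document can be pictured like this (a sketch of the behaviour described above, not the scraper's code):

```python
def parent_slugs(slug):
    """For "Web/CSS/display", the parents are "Web" then "Web/CSS"."""
    parts = slug.split("/")
    return ["/".join(parts[:i]) for i in range(1, len(parts))]

print(parent_slugs("Web/CSS/display"))  # ['Web', 'Web/CSS']
print(parent_slugs("Web"))              # [] - a top-level doc has no parents
```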
2.11.4 Add Documents Linked from a Page
If you need all the documents linked from a page in production or another Kuma instance, you can use the scrape_links command. In the container (after make bash or similar), run the following, using the desired URL: ./manage.py scrape_links # Scrape the homepage ./manage.py scrape_links https://developer.mozilla.org/en-US/Web/CSS/display
This treats a page a lot like a web crawler would, looking for wiki document links with the same locale from: • The header • The footer • The content • KumaScript-rendered sidebars and content This can result in a lot of traffic. There are options that don’t affect the initial link scrape, but that are passed on to the scraped documents:
--revisions REVS Scrape more than one revision --translations Scrape the translations of a page as well --depth DEPTH Scrape one or more levels of child pages as well
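The crawl only follows wiki document links in the same locale. A simplified, stdlib-only sketch of that selection (the URL pattern is an assumption based on MDN's /locale/docs/ paths):

```python
from html.parser import HTMLParser

class LocaleLinkParser(HTMLParser):
    """Collect hrefs that look like wiki documents in one locale."""

    def __init__(self, locale):
        super().__init__()
        self.prefix = f"/{locale}/docs/"
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            # Keep only same-locale wiki links; skip other locales
            # and external sites.
            if href.startswith(self.prefix):
                self.links.append(href)

parser = LocaleLinkParser("en-US")
parser.feed("""
  <a href="/en-US/docs/Web/CSS">CSS</a>
  <a href="/fr/docs/Web/CSS">CSS (fr)</a>
  <a href="https://example.com/">elsewhere</a>
""")
print(parser.links)  # ['/en-US/docs/Web/CSS']
```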
2.11.5 Create the Sample Database
These scraping tools are used to create a sample database of public information, which is used for development environments and functional testing without exposing any private production data. When it is time to create a new sample database, an MDN staff person runs the command in the container: time scripts/create_sample_db.sh
This takes 2 to 2½ hours with a good internet connection. This is then uploaded to the mdn-downloads site: • https://mdn-downloads.s3-us-west-2.amazonaws.com/index.html • https://mdn-downloads.s3-us-west-2.amazonaws.com/mdn_sample_db.sql.gz This uses the specification at etc/sample_db.json, which includes the sources for scraping, as well as fixtures needed for a working development and testing environment.
2.11.6 Load Custom Data
The sample_mdn command does the work of creating the sample database. It can also be used with a different specification to load custom fixtures and scrape additional data for your local environment. For example, loading a new sample database wipes out existing data, so you would otherwise need to run the instructions in the Update the Sites section again. Instead, you can create a specification for your development user and GitHub OAuth application:

{
  "sources": [
    ["user", "my_username", {"social": true, "email": "[email protected]"}]
  ],
  "fixtures": {
    "users.user": [
      {
        "username": "my_username",
        "email": "[email protected]",
        "is_staff": true,
        "is_superuser": true
      }
    ],
    "socialaccount.socialapp": [
      {
        "name": "GitHub",
        "client_id": "client_id_from_github",
        "secret": "secret_from_github",
        "provider": "github",
        "sites": [[1]]
      }
    ],
    "socialaccount.socialaccount": [
      {
        "uid": "uid_from_github",
        "user": ["my_username"],
        "provider": "github"
      }
    ],
    "account.emailaddress": [
      {
        "user": ["my_username"],
        "email": "[email protected]",
        "verified": true
      }
    ]
  }
}
To use this, you’ll need to replace the placeholders: • my_username - your MDN username • [email protected] - your email address, verified on GitHub • client_id_from_github - from your GitHub OAuth app • secret_from_github - from your GitHub OAuth app • uid_from_github - from your MDN SocialAccount Save it, for example as my_data.json, and, after loading the sample database, load the extra data: ./manage.py sample_mdn my_data.json
This will allow you to quickly log in again using GitHub auth after loading the sample database.
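Before loading a custom spec, it can help to confirm that no placeholders were left unreplaced. A hypothetical check (the placeholder names follow the list above; this is not part of sample_mdn):

```python
PLACEHOLDERS = {"my_username", "client_id_from_github",
                "secret_from_github", "uid_from_github"}

def leftover_placeholders(spec_text):
    """Return any placeholder strings still present in the spec file."""
    return sorted(p for p in PLACEHOLDERS if p in spec_text)

# A spec where one placeholder was forgotten:
spec = '{"users.user": [{"username": "jsmith", "secret": "secret_from_github"}]}'
print(leftover_placeholders(spec))  # ['secret_from_github']
```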
2.11.7 Anonymized Production Data
The production database contains confidential user information, such as email addresses and authentication tokens, and it is not distributed. We try to make the sample database small but useful, and provide scripts to augment it for specific uses, reducing the need for production data. Production-scale data is occasionally needed for development, such as testing the performance of data migrations and new algorithms, and for the staging site. In these cases, we generate an anonymized copy of production data, which deletes authentication keys and anonymizes user records. This is generated with the script scripts/clone_db.py on a recent backup of the production database. You can see a faster and less resource-intensive version of the process by running it against the sample database: scripts/clone_db.py -H mysql -u root -p kuma -i mdn_sample_db.sql.gz anon_db
This will generate a file anon_db-anon-20170606.sql.gz, where the date is today’s date. To check that the anonymize script ran correctly, load the anonymized database dump and run the check script: zcat anon_db-anon-20170606.sql.gz | ./manage.py dbshell cat scripts/check_anonymize.sql | ./manage.py dbshell
This runs a set of counting queries that should return 0 rows. A similar process is used to anonymize a recent production database dump. The development environment is not tuned for the I/O, memory, and disk requirements, and will fail with an error. Instead, a host-installed version of MySQL
is used, with the custom collation. The entire process, from getting a backup to uploading a confirmed anonymized database, takes about half a day. We suspect that a clever user could de-anonymize the data in the full anonymized database, so we do not distribute it, and try to limit our own use of the database.
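The shape of the anonymization is roughly: delete anything secret, and replace identifying fields with deterministic fakes so the data stays realistic at scale. A toy sketch in Python (field names are illustrative; the real work is done in SQL by clone_db.py):

```python
def anonymize_user(user):
    """Strip confidential fields, keeping the row usable for testing."""
    uid = user["id"]
    return {
        "id": uid,
        # Deterministic fake identity derived only from the primary key.
        "username": "user%d" % uid,
        "email": "user%d@example.com" % uid,
        "password": "",    # no usable credentials survive
        "auth_token": "",  # authentication keys are deleted outright
    }

row = {"id": 42, "username": "jsmith", "email": "jsmith@real.example",
       "password": "pbkdf2...", "auth_token": "abc123"}
print(anonymize_user(row)["email"])
```

The check_anonymize.sql queries then count rows where real values survive, and should all return 0.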
2.12 Documentation
This documentation is generated and published at Read the Docs whenever the master branch is updated. GitHub can render our .rst documents as ReStructuredText, which is close enough to Sphinx for most code reviews, without features like links between documents. It is occasionally necessary to generate the documentation locally. It is easiest to do this with a virtualenv on the host system, using Docker only to regenerate the MDN Sphinx template. If you are not comfortable with that style of development, it can be done entirely in Docker using docker-compose.
2.12.1 Generating documentation
Sphinx uses a Makefile in the docs subfolder to build documentation in several formats. MDN only uses the HTML format, and the generated document index is at docs/_build/html/index.html. To generate the documentation in a virtualenv on the host machine, first install the requirements: pip install -r docs/requirements.txt
Then switch to the docs folder to use the Makefile: cd docs make html python -m webbrowser file://${PWD}/_build/html/index.html
To generate the documentation with Docker: docker-compose run --rm web sh -c "\ python -m venv /tmp/.venvs/docs && \ . /tmp/.venvs/docs/bin/activate && \ pip install -r /app/docs/requirements.txt && \ cd /app/docs && \ make html" python -m webbrowser file://${PWD}/docs/_build/html/index.html
A virtualenv is required, to avoid a pip bug when changing the version of a system-installed package.
2.13 Docker
Docker is used for development and for deployment.
2.13.1 Docker Images
Docker images are used in development, usually with the local working files mounted in the images to set behaviour.
Images are built by Jenkins, after tests pass, and are published to DockerHub. We try to store the configuration in the environment, so that the published images can be used in deployments by setting environment variables to deployment-specific values. Here are some of the images used in the Kuma project:
kuma
The kuma Docker image builds on the kuma_base image, installing a kuma branch and building the assets needed for running as a webservice. The environment can be customized for different deployments. The image can be recreated locally with make build-kuma. The image tagged latest is used by default for development. It can be created locally with make build-kuma VERSION=latest. The official latest image is created from the master branch in Jenkins and published to DockerHub. kuma_base
The kuma_base Docker image contains the OS and libraries (C, Python, and Node.js) that support the kuma project. The kuma image extends this by installing the kuma source and building assets needed for production. The image can be recreated locally with make build-base. The image tagged latest is used by default for development. It can be created locally with make build-base VERSION=latest. The official latest image is created from the master branch in Jenkins and published to DockerHub. kumascript
The kumascript Docker image contains the kumascript rendering engine and support files. The image must be created from the kuma repo since it depends upon that repo’s locale sub-module. From the kuma repo’s root directory, use make build-kumascript for an image tagged with the current commit-hash, make build-kumascript KS_VERSION=latest for one tagged with latest, or make build-kumascript-with-all-tags for an image tagged with both. The image tagged latest is used by default for development, while an image tagged with a commit-hash can be used for deployment. The official images are created from the master branch in Jenkins and published to DockerHub.
2.14 Deploying MDN
MDN is served from Amazon Web Services in the US West (Oregon) datacenters. MDN is deployed as Docker containers managed by Kubernetes. The infrastructure is defined within MDN’s infra repository, including the tools used to deploy and maintain MDN. Deploying new code to AWS takes several steps, and about 60 minutes for the full process. The deployer is responsible for emergency issues for the next 24 hours. There are 1-3 deployments per week, usually between 7 AM and 1 PM Pacific, Monday through Thursday.
Note: This page describes deployment performed by MDN staff. It requires additional setup and permissions not described here. Deployments will not work for non-staff developers, and should not be attempted.
2.14.1 Pre-Deployment
Before deploying, a staff member should: • (Once a week) Check that the latest mdn-browser-compat-data node package is included in the KumaScript image. The latest package is installed when the image is created, during the RUN npm update step, and can be verified by reading the build logs or running the image. • (Once a week) Update the locale and kumascript submodules. The locale submodule must be updated to apply new translations to production. The kumascript submodule is not used for deployment, but updating it keeps the development environment in sync. – Commit or stash any changes in your working copy. – Create a pre-push branch from master: git fetch origin git checkout -b pre-push-`date +"%Y-%m-%d"` origin/master
– Update the submodules: git submodule update --remote git commit -m "Updating submodules" kumascript locale
– Push new localizable strings. This step is only necessary when there are new strings. The TravisCI job TOXENV=locales checks for new strings, and you can examine the output of the recent master build to see if there are differences. At the same time, it’s not too hard to run locally and then revert if there are no useful changes. 1. On the host system, check out the master locale branch:
cd locale git checkout master git pull
2. Update kuma/settings/common.py, and bump the version in PUENTE[’VERSION’]. 3. Inside the development environment, extract and rebuild the translations:
make localerefresh
4. On the host system (still in the locale subfolder), review the changes to source English strings:
git diff templates/LC_MESSAGES
5. If there are useful changes (ignore comment lines and look for msgid as the changed line), commit the files in the locale submodule:
git add --all . git commit
For the commit message, use the PUENTE[’VERSION’] in the commit subject, and summarize the string changes in the commit body, like:
Update strings 2018.08
* Add "View All" text for document history pagination
Attempt to push to the mdn-l10n repository:
git push
If this fails, do not force the push with --force, and do not pull and create a merge commit. Someone has added a translation while you were working, and you need to start over to preserve their work:
git fetch git reset --hard @{u}
This resets your locale submodule to the new master. Start over on step 3 (make localerefresh). If the push to mdn-l10n is a success, commit your Kuma changes:
cd .. git commit kuma/settings/common.py locale
You can use the same commit message used for the locale commit. 6. If there are no new strings, throw the changes away, starting in the locale folder:
git checkout -- . cd .. git checkout -- kuma/settings/common.py
– Push the branch and open a pull request: git push -u origin pre-push-`date +"%Y-%m-%d"`
– Check that tests pass. The TOXENV=locales job, which uses Dennis to check strings, is the job that occasionally fails. This is due to a string change committed in Pontoon that will break rendering in production. If this happens, fix the incorrect translation, push to master on mdn-l10n, and update the submodule in the pull request. Wait for the tests to pass. – Merge the pull request. A “Rebase and merge” is preferred, to avoid a merge commit with little long-term value. Delete the pre-push branch. • Check the latest images from master are built by Jenkins (Kuma master build, KumaScript master build, both private to staff), and uploaded to the DockerHub repositories (Kuma images, KumaScript images).
2.14.2 Deploy to Staging
The staging site is located at https://developer.allizom.org. It runs on the same Kuma code as production, but against a different database, other backing services, and with less resources. It is used for verifying code changes before pushing to production. • Start the staging push, by updating and pushing the stage-push branches: git fetch origin git checkout stage-push git merge --ff-only origin/master git push cd kumascript git fetch origin git checkout stage-push git merge --ff-only origin/master git push cd ..
• Prepare for testing on staging:
– Look at the changes to be pushed (What’s Deployed on Kuma, and What’s deployed on KumaScript). To enlist the help of pull request authors and others, you can report bug numbers and PRs in Matrix. – Think about manual tests to confirm the code changes work without errors. • Merge and push to the stage-integration-tests branch: git checkout stage-integration-tests git merge --ff-only origin/master git push
This will kick off functional tests in Jenkins, which will also report to #mdndev. • Manually test changes on https://developer.allizom.org. Look for server errors on homepage and article pages. Try to verify features in the newly pushed code. Check the functional tests. • Announce in Slack (#mdn-dev) that staging looks good, and you are pushing to production.
2.14.3 Deploy to Production
The production site is located at https://developer.mozilla.org. It is monitored by the development team and MozMEAO. • Pick a push song on https://www.youtube.com. Post link to Matrix. • Start the production push: git fetch origin git checkout prod-push git merge --ff-only origin/master git push cd kumascript git fetch origin git checkout prod-push git merge --ff-only origin/master git push cd ..
• For the next 30-60 minutes, – Watch https://developer.mozilla.org – Monitor MDN in New Relic for about an hour after the push, for increased errors or performance changes. – Start the standby environment deployment – Close bugs that are now fixed by the deployment – Move relevant Taiga cards to Done – Move relevant Paper cut cards to Done
2.14.4 Deploy to Standby Environment
The standby environment is located in the AWS EU Frankfurt datacenter. It runs the same code and database as production, but runs in read-only maintenance mode and on minimal resources. It will be scaled up and handle MDN traffic if there is a critical failure in the AWS US West datacenter. • Start the standby environment push:
git fetch origin git checkout standby-push git merge --ff-only origin/master git push cd kumascript git fetch origin git checkout standby-push git merge --ff-only origin/master git push cd ..
2.15 Rendering Documents
Kuma’s wiki format is a restricted subset of HTML, augmented with KumaScript macros {{likeThis}} that render to HTML for display. Rendering is done in several steps, many of which are stored in the database for read-time performance.
A revision goes through several rendering steps before it appears on the site: 1. A user submits new content, and Kuma stores it as Revision content 2. Kuma bleaches and filters the content to create Bleached content 3. KumaScript renders macros and returns KumaScript content
4. Kuma bleaches and filters the content again to create Rendered content 5. Kuma divides and processes the content into Body HTML, Quick links HTML, ToC HTML, and Summary text and HTML
There are other rendered outputs:
• Kuma normalizes the Revision content into the Diff format for comparing revisions
• Kuma filters the Revision content and adds section IDs to create the Triaged content for updating pages
• KumaScript renders the Triaged content as Preview content
• Kuma filters the Bleached content and adds section IDs to publish the Raw content
• Kuma extracts code sections from the Rendered content to flesh out a Live sample
• Kuma extracts the text from the Rendered content for the Search content
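The first rendering steps can be sketched as: sanitize, render macros, then sanitize again (because macro output is HTML too). This toy uses a regex macro renderer and a tag whitelist; Kuma's real pipeline uses the bleach library and the KumaScript service, so every name below is illustrative:

```python
import re

ALLOWED_TAGS = {"p", "ul", "li", "a", "h2", "div", "code"}
TAG = re.compile(r"</?([a-zA-Z0-9]+)[^>]*>")
MACRO = re.compile(r"\{\{\s*HTMLElement\('([^']+)'\)\s*\}\}")

def sanitize(html):
    """Drop any tag not on the whitelist (a stand-in for bleach).
    Note: only the tags are stripped; the inner text survives."""
    return TAG.sub(
        lambda m: m.group(0) if m.group(1).lower() in ALLOWED_TAGS else "",
        html)

def render_macros(html):
    """Expand {{HTMLElement('x')}} into a code-formatted reference."""
    return MACRO.sub(lambda m: "<code>&lt;%s&gt;</code>" % m.group(1), html)

revision = "<p>See {{HTMLElement('div')}}</p><script>alert(1)</script>"
bleached = sanitize(revision)       # step 2: Bleached content
rendered = render_macros(bleached)  # step 3: KumaScript content
final = sanitize(rendered)          # step 4: Rendered content
print(final)
```

The second sanitize pass matters because macros can themselves emit HTML that must obey the same whitelist.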
2.15.1 Revision content
CKEditor provides a visual HTML editor for MDN writers. The raw HTML returned from CKEditor is stored in the Kuma database for further processing.
source: User-entered content, usually via CKEditor from the edit view (URLs ending with $edit), or developer-submitted content via an HTTP PUT to the API view (URLs ending in $api)
displayed on MDN: “Revision Source” section of the revision detail view (URLs ending with $revision/)
database: wiki_revision.content
code: kuma.wiki.models.Revision.content
To illustrate rendering, consider a new document published at /en-US/docs/Sandbox/simple with this Revision content:

{{ CSSRef }}
I am a simple document with a CSS sidebar.
I am red.
Some Links
- The HTML Reference
- {{HTMLElement('div')}}
- A new document
This document has elements that highlight different areas of rendering:
• A sidebar macro CSSRef, which will be rendered by KumaScript and extracted for display
• An <h2> tag, which will gain an id attribute
• A list of three links:
1. An HTML link to an existing document
2. A reference macro HTMLElement which will be rendered by KumaScript
3. An HTML link to a new document, which will get rel="nofollow" and class="new" attributes
• An onclick attribute, added in Source mode, which will be removed
• A