This document explains how to add a new Practice Exercise.
The simplest way to check what Practice Exercises have not yet been implemented is to go to the track's build page (e.g. https://exercism.org/tracks/csharp/build) and check the "Practice Exercises" section.
The data on the build page is updated once a day.
You can quickly scaffold a new Practice Exercise by running the `bin/add-practice-exercise` script (source) from the track's root directory:

```shell
bin/add-practice-exercise <exercise-slug>
```
Optionally, you can also specify the exercise's difficulty (via `-d`) and/or the author's GitHub username (via `-a`):

```shell
bin/add-practice-exercise -d 3 -a foobar <exercise-slug>
```
If you're working on a track repo without this script, feel free to copy it into your repo using the above source link.
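For reference, the scaffolding script typically generates a layout like the following (a sketch for a hypothetical `two-fer` exercise on a Python track; the exact files differ per track):

```text
exercises/practice/two-fer
├── .docs
│   └── instructions.md
├── .meta
│   ├── config.json
│   ├── example.py
│   └── tests.toml
├── two_fer.py
└── two_fer_test.py
```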
Once the scaffolded files have been created, you'll then have to:
- Add the `authors` key to the exercise's `.meta/config.json` file.
- Add the following keys to the exercise's entry in the track's `config.json` file:
  - `practices` (only required when the track has concept exercises)
  - `prerequisites` (only required when the track has concept exercises)
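To illustrate, continuing the hypothetical `two-fer` Python example from above, the exercise's `.meta/config.json` might end up looking roughly like this (field values are illustrative; most of the file is generated by the scaffolding script):

```json
{
  "authors": ["foobar"],
  "files": {
    "solution": ["two_fer.py"],
    "test": ["two_fer_test.py"],
    "example": [".meta/example.py"]
  },
  "blurb": "Create a sentence of the form \"One for X, one for me.\""
}
```

And the exercise's entry in the track-level `config.json` might look like this, where `practices` and `prerequisites` list concept slugs that are assumed to exist on the track:

```json
{
  "slug": "two-fer",
  "name": "Two Fer",
  "uuid": "<generated-by-the-scaffolding-script>",
  "practices": ["string-formatting"],
  "prerequisites": ["basics"],
  "difficulty": 1
}
```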
A key part of adding an exercise is adding tests. Roughly speaking, there are two options when adding tests for a Practice Exercise:

1. Port the tests from the exercise's `canonical-data.json` file as found in the problem-specifications repo.
2. Port the tests from an existing implementation in another track (tip: use https://exercism.org/exercises/<slug> to get an overview of which tracks have implemented a specific exercise).

The second option can be particularly appealing, as it can give you results quickly. Keep in mind, though, that you should tweak the implementation to best fit your track. As an example, some tracks do not use classes but only work with functions. If your track usually works with objects, though, you should adapt the implementation accordingly.
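As a sketch of the first option: the canonical data for the `two-fer` exercise describes cases that map a name to the string "One for <name>, one for me.". Ported by hand to a Python test file, two of those cases might look like this (file names and test style are assumptions; follow your track's conventions):

```python
# two_fer_test.py (illustrative): canonical cases ported by hand.
import unittest

from two_fer import two_fer


class TwoFerTest(unittest.TestCase):
    def test_a_name_given(self):
        self.assertEqual(two_fer("Alice"), "One for Alice, one for me.")

    def test_another_name_given(self):
        self.assertEqual(two_fer("Bob"), "One for Bob, one for me.")


if __name__ == "__main__":
    unittest.main()
```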
Some tracks use a test generator to automatically (re-)generate an exercise's test file(s). Please check the track documentation to see if there is a test generator and if so, how to use it.
To ensure that it is possible to write code that passes the tests, an example implementation needs to be added. The code does not have to be idiomatic; it only has to pass the tests.
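Continuing the `two-fer` sketch from above, a minimal example implementation could be as simple as this (the `.meta/example.py` location is an assumption; tracks configure where example solutions live):

```python
# .meta/example.py (illustrative): minimal code that passes the tests above.
def two_fer(name="you"):
    return f"One for {name}, one for me."
```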
You can verify that the example implementation passes all the tests by running the `bin/verify-exercises` script (source) from the track's root directory:

```shell
bin/verify-exercises <exercise-slug>
```
Use the output to verify that all tests pass.
If you're working on a track repo without this script, feel free to copy it into your repo using the above source link.
Under the hood, the `bin/verify-exercises` script does several things, the most important being that it runs the exercise's test suite against the example implementation (instead of against the stub).
The stub implementation file(s) provide a starting point for students.
We recommend that stub files contain a minimal amount of code. In practice, this means defining the functions/methods that are tested by the test suite. Tracks are free to decide how they set up this code, as long as they ensure that the stub code initially fails all the tests. For example:
Python:

```python
def two_fer(name):
    pass
```
Kotlin:

```kotlin
fun twofer(name: String): String {
    TODO("Implement the function to complete the task")
}
```
The final step is to run the linter to check if the track's (configuration) files are properly structured - both syntactically and semantically.
First, make sure you have the latest version of `configlet` by running:

```shell
bin/fetch-configlet
```
Then run the linter:

```shell
bin/configlet lint
```
Use the output to verify that all is well.
Once all is well, you can then Submit a Pull Request to the track's repository.
Before submitting, please read the Contributors Pull Request Guide and Pull Request Guide.
Ensure the PR description lists the exercise being added.