
Introduction to Dynamic Programming

This is a quick, five-minute, easy-to-read piece, even for the least experienced programmers.

So you wanna see some new programming, or some epic pro-gamer moves to up your competitive programming skills?

Well, look no further than this CPMSoc blog on dynamic programming!

Well, the obvious question is what is dynamic programming?

Dynamic programming is programming that is dynamic.

A quick Google search would reveal (or at least according to Programiz) that it is "a technique in computer programming that helps to efficiently solve a class of problems that have overlapping subproblems and optimal substructure property."

What does this actually mean though?

The paramount idea of dynamic programming circulates around subproblems: essentially, taking a large problem and breaking it down into smaller problems. Then, the values of the smaller problems are saved to be used later to solve the initial, larger problem.

Accordingly, the key benefit of dynamic programming in comparison to other modes of problem-solving is that, by saving results, it can optimise the solution by avoiding repeated calculation, thus decreasing the overall run time.

But before we uncover more of dynamic programming, let's review recursion and memoization.


Welp, we find ourselves asking ourselves: what is meant by recursion?

Once again, a quick, frantic Google search uncovers that recursion can be defined as "the determination of a succession of elements (such as numbers or functions) by operation on one or more preceding elements according to a rule or formula involving a finite number of steps".

We can reconceptualise recursion with the aid of an example.

Let’s imagine that we need to calculate the total number of stairs in the UNSW Stairs of Doom on the Main Walkway. But since our programming overlords, the directors, are busy, the task falls on me. Now, due to, let’s say, my very limited mathematics skills and an oversight fail in MATH1131, I lack the ability to count them incrementally.

So, collectively, the members of CPMSoc came up with a solution.

Starting with me standing at the top of the staircase, one CPMSoc member stands on each step below me. So I ask the person in front of me how many steps there are, knowing that if the person in front of me says there are "x" stairs below them, there are "x + 1" stairs in total. This is a subproblem: I solve for "x + 1" steps instead of the initial total. Then the person in front of me asks the same question of the person in front of them, and this goes on and on until there is no one left in front to ask. Once we reach the bottom of the staircase, where no one is standing, that is the base case: zero individuals standing.

Now, we know that eventually, as each person asks the person in front, we will reach the bottom where no person is standing. This principle is referred to as "bottoming out" in recursive programming: a condition (in our case, the fact that at the bottom of the staircase there are ZERO people) that stops further calculation.

Below are fragments of code that count the steps using iteration and recursion respectively.

Recursion vs Iteration

#include <stdio.h>

#define NUMBER_VALUE_OF_STAIRS 10   /* example value; substitute the real count */

/* Recursive count: ask the "person" one step below how many steps remain.
   Base case: no one left below, so zero steps. */
int recursive_function(int n) {
    if (n == 0) {
        return 0;
    } else {
        return 1 + recursive_function(n - 1);
    }
}

int main(void) {
    /* Iterative count: climb one step at a time */
    int counter = 0;
    int actual_number_of_steps = NUMBER_VALUE_OF_STAIRS;
    while (counter < actual_number_of_steps) {
        counter++;
    }
    printf("There is a total of %d steps\n", counter);
    printf("There is a total of %d steps\n", recursive_function(actual_number_of_steps));
    return 0;
}

Technical Corner: If you are interested in Fibonacci code and the difference in memory allocation between recursion and iteration, a great resource to check out, alongside some practice questions, can be found here. For more information about the context, check out there.

Hopefully you've learnt something so far, other than that a lot of pain could be saved if UNSW chose to install elevators.

But, with a dramatic drumroll, we move on to the next section of our blog: memoization!


Well, memoization relies on key principles of memory, specifically the cache.

Keeping with the pattern of frantic Google searches, an input of "What is a cache" into Google prompts a swift response of "hardware or software that is used to store something, usually data, temporarily in a computing environment."

Caches are used often in our day-to-day lives, from deleting internet browser histories on how to arson UNSW, to their benefits for programming optimisation (a fun read on non-programming benefits can be found here).

Memoization is a form of caching where, specifically, the return values of functions are saved. In doing so, if a function is later called with the same input, the output can be returned without any need for recalculation.

This is the key benefit of memoization: it optimises time complexity, as implementing memoization significantly reduces the time spent recalculating results.

Whilst this specific blog isn't going to focus as much on the implementation of memoization, for those interested, a great resource I would recommend is here.

Now, as we move forward, we are going to look more closely at memoization in the context of dynamic programming.


The benefits of dynamic programming are exhibited through the steps of calculating Fibonacci numbers.

NOTE: As an exercise, after reading the explanation below, try writing a programme yourself to solve for Fibonacci numbers. See how many different ways and methods can be used to conjure a result.

It is calculated recursively, where each subproblem is established by calculating incrementally smaller numbers. The diagram above evidences how it "bottoms out" when it reaches the base cases of fib(1) and fib(0).

Next, memoization is implemented to save the values of the smaller fib calls, so that when they are recursively called upon again, the results are already stored. The specific type of dynamic programming we talked about this week is top-down, which utilises memoization and recursion. If you are curious about the implementations of various programmes to solve the Fibonacci sequence and the differences between them, a good exercise would be to peruse this.

The next blog on dynamic programming will answer your remaining burning questions: is bottom-up or top-down dynamic programming the most elite, and which has better time complexity?

That's all from us, folks!
