# Editorial

Problem Author: Illia Fursov, Maksym Shvedchenko

Prepared by: Illia Shevchenko, Maksym Shvedchenko

Editorial by: Illia Fursov

Group 1: $n \le 400$.

To solve the problem with this constraint, you can just output $20$ "staircases" consisting of the $20$ letters from "`a`" to "`t`" ("`abcd...qrstabcd...`"). This way, all the letters will see at most $19$ "`t`"'s and none of the other letters.

Implementation (C++):

```cpp
#include <bits/stdc++.h>
using namespace std;

signed main() {
    // build 20 staircases "abcd...qrst", 400 characters in total
    string s = "";
    for (int i = 0; i < 20; i++)
        for (int j = 0; j < 20; j++)
            s.push_back('a' + j);
    int n;
    cin >> n;
    for (int i = 0; i < n; i++) cout << s[i];
    cout << endl;
}
```

Time complexity: $O(20^2 + N)$ (or $O(N)$ without pre-generating the string)

Memory complexity: $O(20^2 + N)$ (or $O(1)$)

Group 2: $n \le 8000$.

This group was added to let solutions with a correct idea, but not an entirely correct implementation, get more points.

Group 3: $n \le 100000$ (no additional constraints).

There are at least two possible ways to construct a suitable string of this length.

The first idea is to always append the smallest character such that the string does not become scared after adding it.

Initially, you have an empty string. You can add a letter "`a`" to it, and it will be fine. Then you can add "`a`" $20$ more times. After that, you cannot add "`a`" one more time: otherwise it would see the other $21$ "`a`"'s and the string would become scared. But you can add "`b`", which will see only "`a`"'s, and "`a`" < "`b`", so it is fine. After that, you can add $20$ more "`a`"'s, and the last of them will see $19$ "`a`"'s and one "`b`". Then you can repeat the operations in a similar way. At some point, there will be $21$ "`b`"'s, with many "`a`"'s among them and one "`a`" after the last "`b`". After that, you cannot add "`b`" again, since otherwise it would be scared by the other "`b`"'s, but you can add "`c`" and "hide" all the preceding characters from the next ones.

The idea of the implementation is to maintain the number of letters that will still be seen by the next character added. The following algorithm constructs a string with a length of almost 300,000, so it is a correct solution:

```cpp
// the variables in these loops are the amounts of the corresponding
// letters that will still be seen by the next character added
for (int f = 0; f <= 20; f++) {
    for (int e = 0; e + f <= 20; e++) {
        for (int d = 0; d + e + f <= 20; d++) {
            for (int c = 0; c + d + e + f <= 20; c++) {
                for (int b = 0; b + c + d + e + f <= 20; b++) {
                    for (int a = 0; a + b + c + d + e + f <= 20; a++)
                        s.push_back('a');
                    s.push_back('b');
                }
                s.push_back('c');
            }
            s.push_back('d');
        }
        s.push_back('e');
    }
    s.push_back('f');
}
```

Time complexity: $O(20^6 + N)$ (or $O(N)$ if only the first $N$ characters are generated)

Memory complexity: $O(20^6 + N)$ (or $O(1)$)

This can also be done in a neater and more universal way:

```cpp
int seen[26] = {0}; // seen[c] = number of letters 'a' + c that will still be seen
int sum = 0;        // total number of letters that will still be seen
for (int i = 0; i < n; i++) {
    int c = 0;
    if (sum > 20) {
        // find the smallest letter that is still seen and output the next
        // letter instead, hiding everything up to and including it
        while (!seen[c]) ++c;
        sum -= seen[c];
        seen[c] = 0;
        ++c;
    }
    // this assert is here to show that the letters will not be very large
    // with the given constraints
    assert(c <= 'f' - 'a');
    ++sum, ++seen[c];
    cout << char(c + 'a');
}
```

Time complexity: $O(N)$

Memory complexity: $O(26)$

By the way, the second code can generate strings with more than $10^{12}$ letters!

There is also a second way to solve the problem. Its idea is the same as the one used in the Fenwick tree data structure. Here is its implementation:

```cpp
#include <bits/stdc++.h>
using namespace std;

signed main() {
    int n;
    cin >> n;
    for (int i = 1; i <= n; i++) {
        int a = __builtin_ctz(i); // the number of trailing zero bits of i
        cout << char('a' + a);
    }
}
```

Time complexity: $\sim O(N)$ (because of the `__builtin_ctz()` function)

Memory complexity: $O(1)$

So, the idea is to output the $(p_i+1)$-th letter of the alphabet for every $i$ $(1 \le i \le n)$, where $2^{p_i}$ is the maximum power of $2$ that $i$ is divisible by.

To prove that this way every letter sees no more than $20$ letters that are greater than or equal to it, we can prove two facts for every $i$ $(1 \le i \le n)$:

1. The $i$-th letter sees no more than $20$ different letters.

2. All the letters that the $i$-th one sees are different.

The first fact comes from arithmetic. The largest letter that can be used in this implementation is the $(\lceil \log_2 n \rceil)$-th one, since this is the largest power of $2$ that a number up to $n$ can be divisible by. $\lceil \log_2 n \rceil \le \lceil \log_2 10^5 \rceil = 17 < 20$. Note that the number of letters used is at most $\lceil \log_2 10^5 \rceil + 1$, but that is still $18 < 20$.

The second fact is based on the following: equal characters that the current one sees cannot be separated by greater ones, since otherwise those of them that come before the greater ones could not be seen. Suppose there can be such characters. In our implementation, this corresponds to the existence of two numbers $i$ and $j$ ($i<j$) such that $p_i=p_j$ and there is no $k$ ($i \le k \le j$) with $p_k>p_i$. $i$ is divisible by $2^{p_i}$, as is $j$, thus $(j-i)$ is also divisible by $2^{p_i}$.

If $j>i+2^{p_i}$, then $j \ge i+2 \cdot 2^{p_i}=i+2^{p_i+1}$ and $j-i+1>2^{p_i+1}$. But among $m$ or more consecutive numbers there is at least one that is divisible by $m$, so some $k$ between $i$ and $j$ would be divisible by $2^{p_i+1}$, which cannot hold in this case.

If $j=i+2^{p_i}$, then $i=c \cdot 2^{p_i}$ and $j=(c+1) \cdot 2^{p_i}$ for some integer $c \ge 0$. But either $c$ or $c+1$ is divisible by $2$, which means that either $i$ or $j$ is divisible by $2 \cdot 2^{p_i}=2^{p_i+1}$, and that cannot be true because of the definition of $p_i$.

So, the $i$-th letter sees no more than $20$ different letters, and all the letters it sees are different. That means that every letter sees at most $20$ letters that are greater than or equal to it, which is what we needed to prove.