Every content word in English is replaced by another word of the same part of speech in a closed loop. Nouns only swap with nouns, verbs with verbs, adjectives with adjectives, adverbs with adverbs. Function words such as "the", "of", and "they're" (articles, prepositions, conjunctions, pronouns, and auxiliary verbs) always pass through unchanged.
Each POS list is arranged in a loop using a hash map: word i points to word i+1, and the index wraps back to the start via the modulo operator (%), so the last word maps to the first. If the noun list is [apple, boat, chair], then apple→boat, boat→chair, chair→apple. Decoding simply inverts the map by swapping every key and value.
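A minimal Python sketch of this structure (names are illustrative, not the app's actual code):

```python
def build_rotation(words):
    """Map each word to the next; the last wraps to the first via modulo."""
    return {words[i]: words[(i + 1) % len(words)] for i in range(len(words))}

nouns = ["apple", "boat", "chair"]
encode = build_rotation(nouns)              # apple→boat, boat→chair, chair→apple
decode = {v: k for k, v in encode.items()}  # decoding: swap every key and value
```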
Words that appear in multiple parts of speech (like "run" which is both a noun and verb) are no longer excluded — instead they get their own pool based on their exact combination of POS tags. "run" (n+v) only swaps with other words that are also exactly n+v. A word that is n+v+adj gets its own separate pool. Each unique combo is an independent rotation loop, just like single-POS pools. The combo key is the sorted POS characters joined together (e.g. "nv", "nva").
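The pooling step could be sketched like this. The canonical tag order "nvar" is an assumption chosen so the keys match the "nv" and "nva" examples above; the real tag characters and ordering may differ:

```python
from collections import defaultdict

POS_ORDER = "nvar"  # assumed order: noun, verb, adjective, adverb

def pool_key(tags):
    """Combo key for a word's exact set of POS tags, e.g. {'v','n'} -> 'nv'."""
    return "".join(p for p in POS_ORDER if p in tags)

def build_pools(lexicon):
    """Group words by their exact POS combination; each pool rotates independently."""
    pools = defaultdict(list)
    for word, tags in lexicon.items():
        pools[pool_key(tags)].append(word)
    return pools

pools = build_pools({"run": {"n", "v"}, "walk": {"n", "v"}, "apple": {"n"}})
# "run" and "walk" share the 'nv' pool; "apple" sits alone in 'n'.
```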
By default, words are shuffled into a random order before building the rotation loop, using a seeded pseudorandom number generator. A seed is just a number — the same seed always produces the same shuffle, so everyone using seed 67 speaks the same dialect. Change the seed and you get a completely different language. Each part of speech uses a different offset of the seed so their shuffles are independent.
The shuffle walks backwards through the word list. At each position i, it picks a random position j between 0 and i (inclusive) and swaps the two. This guarantees every possible ordering is equally likely, with no clustering or bias.
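This is the classic seeded Fisher-Yates shuffle. A sketch, assuming `random.Random` as a stand-in for the app's unspecified PRNG:

```python
import random

def seeded_shuffle(words, seed, pos_offset=0):
    """Fisher-Yates shuffle driven by a seeded PRNG."""
    rng = random.Random(seed + pos_offset)  # per-POS offset keeps shuffles independent
    out = list(words)
    for i in range(len(out) - 1, 0, -1):
        j = rng.randint(0, i)               # random position in [0, i], inclusive
        out[i], out[j] = out[j], out[i]
    return out

# Same seed, same order every time, so everyone on seed 67 shares a dialect.
dialect = seeded_shuffle(["apple", "boat", "chair", "table"], seed=67)
```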
Conjugated or inflected words like "jumped", "running", "dogs", or "biggest" aren't in the dictionary — only
base forms are. The translator strips common suffixes (-ed, -ing, -s,
-est, etc.) to find the base, translates that, then reattaches the original suffix. Longer suffixes
are tried first to avoid incorrect partial matches. If the stripped stem equals the original word, it's skipped
to prevent false matches like "competitiveest".
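The strip-translate-reattach step might look like the following sketch (the suffix list here is a small illustrative subset, ordered longest first):

```python
SUFFIXES = ["ing", "est", "ed", "s"]  # longest first, to avoid partial matches

def translate_inflected(word, lookup):
    """Try the word as-is, then strip suffixes longest-first and reattach."""
    if word in lookup:
        return lookup[word]
    for suf in SUFFIXES:
        if word.endswith(suf):
            stem = word[: -len(suf)]
            if stem in lookup:
                return lookup[stem] + suf  # reattach the original suffix
    return word  # unknown word: pass through unchanged

lookup = {"dog": "chair"}            # stand-in for the real rotation maps
translate_inflected("dogs", lookup)  # strips -s, translates, reattaches: "chairs"
```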
Apostrophe suffixes like 's, 're, 've, 'll are treated as
trailing punctuation by the tokenizer — they're stripped before lookup and reattached to the translated word. So
"dog's" becomes "[translated dog]'s". Trailing apostrophes for plural possessives like "wolves'" are handled the
same way.
Before translating, the text is split into tokens on whitespace. Each token is split into three parts: leading punctuation, the word, and trailing punctuation (including apostrophe suffixes). Only the word part is looked up — everything else reattaches unchanged. Line breaks are preserved in the output.
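A plausible sketch of the three-way token split (the regex is an assumption, not the app's actual pattern):

```python
import re

# leading punctuation | word core | trailing punctuation, where an apostrophe
# suffix like 's or 're counts as part of the trailing section
TOKEN = re.compile(r"^([^A-Za-z]*)([A-Za-z]+)((?:'\w+|[^A-Za-z])*)$")

def split_token(token):
    m = TOKEN.match(token)
    if not m:
        return ("", token, "")  # pure punctuation or digits: pass through whole
    return m.groups()
```

Only the middle piece is looked up; the outer two are reattached verbatim.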
When syllable matching is enabled, words are only swapped with words of the same syllable count within their POS. The rotation loop is split into sub-loops, one per syllable count, so a 2-syllable noun only ever maps to another 2-syllable noun. Syllables are counted heuristically by counting vowel groups. Words that are alone in their bucket (no other word with that count) map to themselves and pass through unchanged.
All settings are encoded as a shareable key like 001167. Format is MYLS[seed]:
M=multi-word, Y=syllable match, L=word list (1–5), S=shuffle, then the seed number. Default:
001167. The key box always shows your current settings. Copy it to share a dialect, or paste a
valid key to instantly apply it. Invalid keys are rejected with a red flash; valid keys confirm with green.
⚠️ Keys with multi-word output on (first digit = 1) won't decode accurately.
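A hypothetical parser for the MYLS[seed] format (field names are illustrative):

```python
import re

KEY = re.compile(r"^(\d)(\d)(\d)(\d)(\d+)$")  # M, Y, L, S, then the seed

def parse_key(key):
    m = KEY.match(key)
    if not m:
        return None  # invalid key: reject (red flash)
    m_flag, y_flag, word_list, s_flag, seed = m.groups()
    return {
        "multi_word": m_flag == "1",
        "syllable_match": y_flag == "1",
        "word_list": int(word_list),
        "shuffle": s_flag == "1",
        "seed": int(seed),
    }
```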
After translation, "a" and "an" are automatically corrected based on the following word. If "a single" becomes "a able", the article is fixed to "an able". This runs as a post-process pass over the rendered output by walking the translated word spans.
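A sketch of that post-process pass over a token list (a simplification of walking rendered spans; the vowel-letter test is the usual rough heuristic, so words like "hour" or "unicorn" would be missed):

```python
VOWELS = ("a", "e", "i", "o", "u")

def fix_articles(words):
    """Correct a/an based on the first letter of the following word."""
    out = list(words)
    for i in range(len(out) - 1):
        w, nxt = out[i], out[i + 1].lower()
        if w.lower() == "a" and nxt.startswith(VOWELS):
            out[i] = "An" if w[0].isupper() else "an"
        elif w.lower() == "an" and not nxt.startswith(VOWELS):
            out[i] = "A" if w[0].isupper() else "a"
    return out
```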
1: Common (default) uses ~4,700 content words from the most common English words.
2: Full WordNet lazy-loads ~142k words for maximum coverage.
3: Cyclexicalisms is a special encode-only mode using the Unicyclist Dictionary. In this mode, each English word is assigned a Cyclexicalism via a deterministic hash of the word and seed: the same word and seed always produce the same output, but it cannot be decoded back, and changing the seed gives entirely different assignments.
4: Full Wiktionary lazy-loads ~850k words from Wiktionary (noun/verb/adj/adv).
5: Wiktionary + Names (800k) also includes proper nouns.
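The encode-only hash assignment could work like this sketch (the hash function and the sample pool entries are assumptions; only the word+seed determinism and non-invertibility are from the description above):

```python
import hashlib

def assign(word, seed, pool):
    """Deterministically pick a pool entry from a hash of (seed, word).
    Many words can land on the same entry, so this cannot be decoded back."""
    digest = hashlib.sha256(f"{seed}:{word.lower()}".encode()).hexdigest()
    return pool[int(digest, 16) % len(pool)]
```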