not a recurrent neural network

For everything that's not in any way related to PureBasic. General chat etc...
idle
Always Here
Posts: 6095
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

not a recurrent neural network

Post by idle »

An experiment in mashing a recurrent neural network into a trie.
Training time: none; output generated in ms.
At the moment the output is totally deterministic; there's no randomness, even if it looks like it.

This is generated from 10 Shakespeare sonnets, with the prompt set to generate up to 300 words.
The output is essentially a mix of the input: it branches when it encounters multiple choices among predicted tokens, using the recurrent hidden-state memory built from the generated output to predict the next token. So the result is a mash-up of the input, weighted by the probability of the relative position within the generated document.
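Roughly, in Python-ish sketch form, the loop looks like this (an illustration of the idea only, not the actual code; the set-overlap scoring is a crude stand-in for the real context-vector scoring):

from collections import defaultdict, Counter

def build_trigram_trie(tokens):
    # (a, b) -> Counter of every token ever seen right after that bigram
    trie = defaultdict(Counter)
    for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
        trie[(a, b)][c] += 1
    return trie

def generate(trie, prompt, max_words=300, window=8):
    out = list(prompt)
    for _ in range(max_words):
        candidates = trie.get(tuple(out[-2:]))
        if not candidates:
            break                        # nothing ever followed this bigram
        recent = set(out[-window:])      # crude stand-in for the hidden state
        # deterministic: prefer tokens that fit the recent context, then counts
        out.append(max(candidates.items(),
                       key=lambda kv: (kv[0] in recent, kv[1]))[0])
    return " ".join(out)

text = ("for having traffic with thyself alone "
        "thou of thyself thy sweet self dost deceive")
print(generate(build_trigram_trie(text.split()), ["thou", "of"]))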

prompt "Thou art"
thou art thyself thy beauty’s legacy
nature’s bequest gives nothing but doth lend
And being frank she lends To those are free
then beauteous niggard why dost thou spend
upon thyself thy beauty’s legacy
nature’s bequest gives nothing but doth lend
And being frank she lends To those are free
then beauteous niggard why dost thou spend
upon thyself thy beauty’s legacy
nature’s bequest gives nothing but doth lend
And being frank she lends To those are free
then beauteous niggard why dost thou abuse
the bounteous largess given thee To give
profitless usurer why dost thou abuse
the bounteous largess given thee To give
profitless usurer why dost thou use
so great a sum of sums yet canst Not live
For having traffic With thyself alone
thou of thyself thy sweet self dost deceive
then how when nature calls thee To be single And thine image dies
prompt "thy Self"
thy self And tell the face thou viewest
now is the time that face should form another
whose fresh repair If now thou Not renewest
thou dost beguile the world, unbless some mother
For where is she so fair whose uneared womb
disdains the tillage of thy lusty days
To say, within thine own bright eyes
feed’st thy light’s flame With self substantial fuel
making a famine where abundance lies
thyself thy foe, To thy sweet self dost deceive
then how when nature calls thee To give
profitless usurer why dost thou abuse
the bounteous largess given thee To give
profitless usurer why dost thou use
so great a sum of sums yet canst Not live
For having traffic With thyself alone
thou of thyself thy sweet self dost deceive
then how when nature calls thee To be gone
what acceptable audit canst thou leave
thy unused beauty must be tombed With thee
which used lives th’ executor
prompt "Dig Deep"
dig deep trenches in thy glass And tell the face thou viewest
now is the time that face should form another
whose fresh repair If now thou Not renewest
thou dost beguile the world, unbless some mother
For where is she so fair whose uneared womb
disdains the tillage of thy lusty days
To say, within thine own deep sunken eyes
were an all-eating shame, And thriftless praise
how much more praise deserv’d thy beauty’s legacy
nature’s bequest gives nothing but doth lend
And being frank she lends To those are free
then beauteous niggard why dost thou abuse
the bounteous largess given thee To give
profitless usurer why dost thou use
so great a sum of sums yet canst Not live
For having traffic With thyself alone
thou of thyself thy sweet self dost deceive
then how when nature calls thee To be single And thine image dies


prompt "thy beauty"
thy beauty and tell the face thou viewest
now is the time that face should form another
whose fresh repair If now thou not renewest
thou dost beguile the world, unbless some mother
For where is she so fair whose uneared womb
disdains the tillage of thy lusty days
To say, within thine own bright eyes
feed’st thy light’s flame With self substantial fuel
making a famine where abundance lies
thyself thy foe, to thy sweet self dost deceive
then how when nature calls thee to give
profitless usurer why dost thou abuse
the bounteous largess given thee to give
profitless usurer why dost thou use
so great a sum of sums yet canst not live
For having traffic with thyself alone
thou of thyself thy sweet self dost deceive
then how when nature calls thee to be gone
what acceptable audit canst thou leave
thy unused beauty must be tombed With thee
which used lives th’ executor
If you tried to train an RNN to output this, you'd be training it for hours or days, and even then it'd be total gibberish.

Ingested 876 tokens
trie size 89,640 bytes
sample size 6,214 bytes
Ratio 14.43

I'm not sure what to think of it but it sure beats training an RNN for useless generative twaddle!
miso
Enthusiast
Posts: 541
Joined: Sat Oct 21, 2023 4:06 pm
Location: Hungary

Re: not a recurrent neural network

Post by miso »

Cool. Is it a Markov chain? I used those for people/city name generators (elvish, dwarvish, nationality-style gibberish names for fantasy, etc.). Works well for those too.
idle
Always Here
Posts: 6095
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: not a recurrent neural network

Post by idle »

miso wrote: Wed Nov 26, 2025 6:41 am Cool. Is it a Markov chain? I used those for people/city name generators (elvish, dwarvish, nationality-style gibberish names for fantasy, etc.). Works well for those too.
No, there's no Markov chain; the weirdness comes from the windowed context vector, which updates at each prediction, so it uses previous state memory.

I have added positional encoding but it's not actually being used yet.

Shakespeare sounds so weird that it's not really hard for it to make some semblance of sense.
No idea how it would work for elvish...

If I add spectral hashing, it could be used to do some pretty neat code completion from your own embedded sources. A spectral hash could then distinguish contexts like keywords, operators, variables...

I can't find any references to it as a structure; it's just a mad beast.
Piero
Addict
Posts: 1128
Joined: Sat Apr 29, 2023 6:04 pm
Location: Italy

Re: not a recurrent neural network

Post by Piero »

Dear idle,
I AM a recurrent neural network!

Demonstration: :ok: 👍
Piero
Addict
Posts: 1128
Joined: Sat Apr 29, 2023 6:04 pm
Location: Italy

Re: not a recurrent neural network

Post by Piero »

PS:
If you think that a "trained database" can be "intelligent", then you can eat this pineapple pizza that's here waiting for you
idle
Always Here
Posts: 6095
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: not a recurrent neural network

Post by idle »

Piero wrote: Mon Dec 01, 2025 9:08 pm Dear idle,
I AM a recurrent neural network!

Demonstration: :ok: 👍
I said it's not an RNN. It's conceptually using the hidden state of an RNN as a windowed context vector. The data is stored in the trie as trigrams so it can probabilistically choose which token to output in the current context.
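In Python terms it's roughly this shape (a toy illustration of trigrams stored in a trie, not the actual structure):

def insert_trigram(trie, a, b, c):
    # nested dicts, one level per token; occurrence counts at the third level
    trie.setdefault(a, {}).setdefault(b, {}).setdefault(c, 0)
    trie[a][b][c] += 1

def candidates(trie, a, b):
    # every token ever seen after the bigram (a, b), with its count
    return trie.get(a, {}).get(b, {})

trie = {}
words = "so great a sum of sums yet canst not live".split()
for tri in zip(words, words[1:], words[2:]):
    insert_trigram(trie, *tri)

print(candidates(trie, "a", "sum"))   # {'of': 1}
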
Piero
Addict
Posts: 1128
Joined: Sat Apr 29, 2023 6:04 pm
Location: Italy

Re: not a recurrent neural network

Post by Piero »

idle wrote: Mon Dec 01, 2025 10:43 pm I said it's not an RNN. It's conceptually using the hidden state of an RNN as a windowed context vector. The data is stored in the trie as trigrams so it can probabilistically choose which token to output in the current context.
My English is poor, and I bet you wouldn't want to hear my answer in the current context :P
Piero
Addict
Posts: 1128
Joined: Sat Apr 29, 2023 6:04 pm
Location: Italy

Re: not a recurrent neural network

Post by Piero »

PS:
these things remind me of "genetic" algorithms (I'm not sure if "genetic" was the name) 94372675 years ago…

Edit/PPS:
please hear this (uncontaminated by pineapple pizzas) Italian:
it's just faster machines; the theory was established a LONG time ago…
…so?
idle
Always Here
Posts: 6095
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: not a recurrent neural network

Post by idle »

The point is it doesn't need training, which is the expensive, time-consuming part of an RNN. Nor does it rely on the Markov assumption: the future state depends on the preceding n-windowed hidden vector state, the hidden layer of an RNN as it were, so it can differentiate and predict what comes next from the context of the document you're working on.
Piero
Addict
Posts: 1128
Joined: Sat Apr 29, 2023 6:04 pm
Location: Italy

Re: not a recurrent neural network

Post by Piero »

idle wrote: Tue Dec 02, 2025 2:09 am it can differentiate and predict what comes next from the context of the document you're working on.
Trained on the context, you mean?
Quindi dovrebbe esibirsi in rimarchevoli traduzioni, financo il contesto sia sfuggevole, poeticamente espresso
(So it should perform remarkable translations, even when the context is elusive, poetically expressed)

Edit:
TBH, I must admit my well trained "mac" translated the above decently well :shock: :lol:
idle
Always Here
Posts: 6095
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: not a recurrent neural network

Post by idle »

Yes, it has a sliding window on the document you're working on, which influences the score of which token comes next.
It's an adaptation of the hidden layer, so it updates on every token.
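Roughly, in Python terms (an illustration of the shape of it, with throwaway random embeddings, not the real code):

import numpy as np

rng = np.random.default_rng(0)
_emb = {}

def embed(token, dim=16):
    # fixed pseudo-random vector per token (placeholder for real embeddings)
    if token not in _emb:
        _emb[token] = rng.normal(size=dim)
    return _emb[token]

def update_context(context, token, decay=0.8):
    # blend the newest token into the running context; older tokens fade out,
    # which is what makes it behave like a sliding window / hidden state
    return decay * context + (1.0 - decay) * embed(token)

context = np.zeros(16)
for tok in "thou art thyself thy".split():
    context = update_context(context, tok)
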
I'll revisit it sometime soon; I'm distracted by preparing for the Xmas invasion.
threedslider
Enthusiast
Posts: 466
Joined: Sat Feb 12, 2022 7:15 pm

Re: not a recurrent neural network

Post by threedslider »

Are you working on something LLM-related in PB?
miso
Enthusiast
Posts: 541
Joined: Sat Oct 21, 2023 4:06 pm
Location: Hungary

Re: not a recurrent neural network

Post by miso »

I think these last RNN and non-RNN projects of idle's are experiments for code autocomplete/predictor AIs, as a guess. Or they're just for fun.
idle
Always Here
Posts: 6095
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: not a recurrent neural network

Post by idle »

threedslider wrote: Fri Dec 05, 2025 8:08 pm Are you working on something LLM-related in PB?
I was looking at what led to LLM architectures as a progression from recurrent neural networks, and trying to work out why they're advantageous over more classical NLP (natural language processing) methods for generative prediction.
On the face of it you're interested in what word comes next, so doesn't the input itself define the probabilities? And if you can capture that in a trie, why wouldn't you use one?
With a trie:
1) you can regurgitate the input exactly, or probabilistically if you wish
2) there is no time spent training a model; you just load the documents
3) inference is O(n) over a subspace (shared prefixes) reached from an O(k) lookup

Using trigrams of the input always guarantees a word follows a word: if the words are a,b,c,d,e it's encoded as a,b,c : b,c,d : c,d,e, so you can look up b,c and that will pick d, then look up c,d and that will pick e.
You can add positional context on a key, a@1:1 a@1:10 a@2:1 a@2:10, so you can use that to recover the input sequence exactly if you choose.
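For illustration, assuming the key format is token@document:position (Python just to show the shape, not the real keys):

def decorate(tokens, doc_id):
    # "when" in doc 1 at position 3 becomes the key "when@1:3"
    return [f"{tok}@{doc_id}:{i}" for i, tok in enumerate(tokens, start=1)]

def recover(keys, doc_id):
    # rebuild one document exactly by sorting its keys on position
    picked = [(int(k.split(":")[1]), k.split("@")[0])
              for k in keys if f"@{doc_id}:" in k]
    return " ".join(tok for _, tok in sorted(picked))

keys = (decorate("when nature calls thee".split(), 1)
        + decorate("so great a sum".split(), 2))
print(recover(keys, 2))   # so great a sum
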
Abstracting it to behave like an RNN instead uses embeddings of the tokens, along with positional information captured in a vector. It's not currently using the positional information encoded in the trie directly for predictions, but it does use it to filter the list of candidates, so enumerating "a b" would return a list of tokens like a b c@1:1, a b c@1:10, a b d@2:1, a b d@2:11. Context is then gained by comparing the embeddings of those tokens against the document context you're working on,
so it's using the hidden state to rank the next token. I have no doubt that it works, but it needs work.
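The ranking step, again only as a rough Python illustration (random embeddings as stand-ins for the real ones): the trie supplies the legal candidates, and the context vector just orders them.

import numpy as np

rng = np.random.default_rng(1)
_emb = {}

def embed(tok, dim=16):
    # placeholder: a fixed random vector per token
    return _emb.setdefault(tok, rng.normal(size=dim))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def pick_next(candidates, context):
    # candidates come from the trie lookup; the document context only ranks them
    return max(candidates, key=lambda tok: cosine(embed(tok), context))

context = sum(embed(t) for t in "thou art thyself".split())
print(pick_next(["deceive", "legacy", "renewest"], context))
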
And yes, it's just for fun, an idle curiosity, but it does have potential for making a smart autocomplete.