Purebasic support from GPT-Chat...

Everything else that doesn't fall into one of the other PB categories.
BarryG
Addict
Addict
Posts: 3292
Joined: Thu Apr 18, 2019 8:17 am

Re: Purebasic support from GPT-Chat...

Post by BarryG »

I suppose we should be glad that ChatGPT is not self-aware? Haha. Unless it's lying to mislead us.

Image
User avatar
Caronte3D
Addict
Addict
Posts: 1027
Joined: Fri Jan 22, 2016 5:33 pm
Location: Some Universe

Re: Purebasic support from GPT-Chat...

Post by Caronte3D »

Anyway, the mere fact that it knows anything about PureBasic is an awesome leap.
User avatar
NicTheQuick
Addict
Addict
Posts: 1224
Joined: Sun Jun 22, 2003 7:43 pm
Location: Germany, Saarbrücken
Contact:

Re: Purebasic support from GPT-Chat...

Post by NicTheQuick »

Its brain was trained at a time when it did not yet exist, so of course it is not self-aware. All its information is from late 2021.
The English grammar is freeware, you can use it freely - but it's not open source, i.e. you cannot change it or publish it in an altered form.
firace
Addict
Addict
Posts: 899
Joined: Wed Nov 09, 2011 8:58 am

Re: Purebasic support from GPT-Chat...

Post by firace »

I asked GPT to write a web browser with bookmarks in PureBasic. The generated code has tons of syntax errors and won't compile, but it is still quite impressive IMHO!


Sure, here is a simple web browser with bookmarks written in PureBasic:

Code: Select all

; Declare the variables we will need
String url
String bookmarks[10]

; Set the bookmarks array with some initial values
bookmarks[0] = "https://www.google.com"
bookmarks[1] = "https://www.youtube.com"
bookmarks[2] = "https://www.twitter.com"

; Create a GUI window to hold the browser
OpenWindow(0, 0, 800, 600, "PureBasic Web Browser", #PB_Window_SystemMenu | #PB_Window_MinimizeGadget | #PB_Window_MaximizeGadget | #PB_Window_ScreenCentered)

; Create a TextField for the user to enter a URL
TextFieldGadget(1, 10, 10, 780, 25)

; Create a Button to load the URL
ButtonGadget(2, 700, 10, 80, 25, "Go")

; Create a ListBox to hold the bookmarks
ListBoxGadget(3, 10, 50, 150, 540)

; Create a WebView to display the webpage
WebViewGadget(4, 170, 50, 620, 540)

; Populate the ListBox with the bookmarks
ForEach(bookmark, bookmarks)
    ListBoxGadget(3, -1, -1, -1, -1, bookmark)
Next

; Set the WebView to load the first bookmark as the initial page
WebViewGadget(4, 0, 0, 0, 0, bookmarks[0])

; Main loop to process events and update the GUI
Repeat
    Event = WaitWindowEvent()

    Select Event
        Case #PB_Event_Gadget
            ; If the Go button is clicked, load the URL from the TextField into the WebView
            If EventGadget() = 2 And EventType() = #PB_EventType_Click
                url = TextFieldGadget(1, 0, 0)
                If Len(url)
                    WebViewGadget(4, 0, 0, 0, 0, url)
                EndIf
            EndIf

            ; If a bookmark is selected in the ListBox, load it into the WebView
            If EventGadget() = 3 And EventType() = #PB_EventType_Select
                url = ListBoxGadget(3, EventIndex())
                WebViewGadget(4, 0, 0, 0, 0, url)
            EndIf
    EndSelect
Until Event = #PB_Event_CloseWindow

; Clean up and close the window
CloseWindow()
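For comparison, here is a rough sketch of what a compiling version might look like using gadgets that actually exist in PureBasic (StringGadget, ListViewGadget and WebGadget; navigating a WebGadget is done with SetGadgetText). This is my own untested sketch, not GPT output; the constant names and layout values are arbitrary choices:

```purebasic
#Window       = 0
#UrlField     = 1
#GoButton     = 2
#BookmarkList = 3
#Browser      = 4

; A few initial bookmarks
Dim bookmarks.s(2)
bookmarks(0) = "https://www.google.com"
bookmarks(1) = "https://www.youtube.com"
bookmarks(2) = "https://www.twitter.com"

If OpenWindow(#Window, 0, 0, 800, 600, "PureBasic Web Browser", #PB_Window_SystemMenu | #PB_Window_MinimizeGadget | #PB_Window_ScreenCentered)
  StringGadget(#UrlField, 10, 10, 680, 25, bookmarks(0))
  ButtonGadget(#GoButton, 700, 10, 80, 25, "Go")
  ListViewGadget(#BookmarkList, 10, 50, 150, 540)
  WebGadget(#Browser, 170, 50, 620, 540, bookmarks(0))

  ; Populate the bookmark list
  For i = 0 To ArraySize(bookmarks())
    AddGadgetItem(#BookmarkList, -1, bookmarks(i))
  Next

  Repeat
    Event = WaitWindowEvent()
    If Event = #PB_Event_Gadget
      Select EventGadget()
        Case #GoButton
          url.s = GetGadgetText(#UrlField)
          If url <> ""
            SetGadgetText(#Browser, url)   ; SetGadgetText() navigates a WebGadget
          EndIf
        Case #BookmarkList
          url = GetGadgetText(#BookmarkList)   ; text of the selected item
          SetGadgetText(#UrlField, url)
          SetGadgetText(#Browser, url)
      EndSelect
    EndIf
  Until Event = #PB_Event_CloseWindow
EndIf
```

Note how little of GPT's version survives: there is no "String" declaration, no ForEach over arrays, and gadget contents are read with GetGadgetText(), not by calling the gadget-creation function again.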
User avatar
idle
Always Here
Always Here
Posts: 5040
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: Purebasic support from GPT-Chat...

Post by idle »

That's not too bad; I will look at that later.
Did you try correcting it? I got it to write a linked list, a doubly linked list and a trie in C. It does make corrections, but they only appear to apply to the current conversation.
User avatar
Lunasole
Addict
Addict
Posts: 1091
Joined: Mon Oct 26, 2015 2:55 am
Location: UA
Contact:

Re: Purebasic support from GPT-Chat...

Post by Lunasole »

Well, it can automate answering some questions, but it's generally useless.
I doubt that even in 2023 we are close to real AI ("real" meaning genuinely similar to human thinking processes).
All existing neural networks (even the most complex and powerful) still break down on relatively complex tasks (for example, they will never write a complete book or generate finished complex code), and there are no mathematical or neurological data/models/theories to resolve this (nor is there yet hardware to run the most complex attempts).
Anyway, ChatGPT is a funny toy ^^
"W̷i̷s̷h̷i̷n̷g o̷n a s̷t̷a̷r"
Bitblazer
Enthusiast
Enthusiast
Posts: 732
Joined: Mon Apr 10, 2017 6:17 pm
Location: Germany
Contact:

Re: Purebasic support from GPT-Chat...

Post by Bitblazer »

Let's see how LaMDA does.
webpage - discord chat links -> purebasic GPT4All
User avatar
pdwyer
Addict
Addict
Posts: 2813
Joined: Tue May 08, 2007 1:27 pm
Location: Chiba, Japan

Re: Purebasic support from GPT-Chat...

Post by pdwyer »

Has anyone read the "Attention Is All You Need" paper?

At first it doesn't seem that difficult to understand (at a high, theoretical level), but if you start trying to implement it (as you would your own ANN), you find it's a bit of a rabbit hole of dependencies.

Curious if anyone else is playing with this to see how it works.
Paul Dwyer

“In nature, it’s not the strongest nor the most intelligent who survives. It’s the most adaptable to change” - Charles Darwin
“If you can't explain it to a six-year old you really don't understand it yourself.” - Albert Einstein
User avatar
Lunasole
Addict
Addict
Posts: 1091
Joined: Mon Oct 26, 2015 2:55 am
Location: UA
Contact:

Re: Purebasic support from GPT-Chat...

Post by Lunasole »

pdwyer wrote: Sun Feb 05, 2023 1:59 pm Has anyone read the "Attention Is All You Need" paper?

At first it doesn't seem that difficult to understand (at a high, theoretical level), but if you start trying to implement it (as you would your own ANN), you find it's a bit of a rabbit hole of dependencies.
Exactly. Such articles won't help you implement your own model; there's much more theory behind it, and experience is needed.
And in most amateur cases the result will not be much different from using some classical random-based generation... haha.
Though I haven't smoked stupid grass for a long time now (and thus could handle such complex things), I don't see much point in trying my own; I'd rather study existing models for now, because it's still interesting to compare them to human abilities and thinking processes.
"W̷i̷s̷h̷i̷n̷g o̷n a s̷t̷a̷r"
marcos.exe
User
User
Posts: 20
Joined: Fri Jan 17, 2020 8:20 pm

Re: Purebasic support from GPT-Chat...

Post by marcos.exe »

GPT mixes PureBasic with other BASIC languages, uses libraries that don't exist as if they did, along with import files like "I think.PBI", and then just apologizes, tries other crazy commands, or simply repeats the error.
So far, it has not given me any code that does not contain an error.
But it's still interesting. It would be nice if it could do simple tasks, to save me having to type out basic and repetitive structures.
For now, though, it looks like a novice who has read part of the manual and is purposely trying to guess the rest of the requested code.
I wonder if GPT-4 can create a simple piece of code that works. It would help me a lot, since I'm blind, and PureBasic is accessible to my screen reader, unlike other IDEs, besides being much easier for creating what I usually need.
When our generation/OS updates, we either update ourselves, or we are removed.
But we are never fully uninstalled.
dagcrack
Addict
Addict
Posts: 1868
Joined: Sun Mar 07, 2004 8:47 am
Location: Argentina
Contact:

Re: Purebasic support from GPT-Chat...

Post by dagcrack »

Conversations are not carried over; each instance is a new instance. That said, if you want to play around with similar models you could grab LLaMA (the models are available, although not to Meta's liking since they were leaked, but there's an open form through which you can apply to get them anyway).

You have to understand these are tokenizers with language models; in between you have whatever layers you decide are useful for the application. In this case, multiple layers were added, which makes it possible to have a conversation. A mere language model will typically be able to complete a sentence, but not answer a question per se.

I've been dealing with simpler forms of ANNs since about a decade ago: just simple backpropagation for visual recognition of simple things (font characters, numbers, simple objects). How a neural network is trained is reflected in its output. For instance, to recognize dirty data you train it with impure data, not just pure data, so that it "learns" and can provide a correct answer. If you train it only on pure data, you have merely created a memory: it will provide a yes/no output based on that memory, which amounts to little more than a binary comparison against the stored object, and most if not all of the benefits are thrown out the window.

So to train a language model, aside from the tokenizer you need the data itself. LLaMA is trained on Common Crawl, whereas the others are not; in fact, they are trained using data taken without consent. This is especially true of Stable Diffusion (most of its models are trained on images without the authors' consent). In that case it's not text2text but text2image (a tokenizer and a language model followed by a diffusion system that outputs the image). The issues are, first, that the data was not legally obtained, and second, that the people employed to tag the images were not only paid a pittance, but many had to quit because of the amount of trash they were seeing (imagine the worst pictures you've ever seen, times ten, and then imagine tagging such images all day long; we're talking murder pictures and worse, not just flowers and kittens as people often think).

The GPT models are huge compared to LLaMA, which in theory makes them broader in terms of data spectrum, but they may not be as good as a model trained specifically for, say, your company (where it will know almost everything about your products but nothing else).

You can limit the prompts or the responses (some of the most heavily filtered inputs and outputs come from Bing these days), but you can't start to say or think these are sentient machines... Only people who clearly don't understand what they're dealing with could think so.

I'm not saying the day won't come, I'm just saying right now there's a pissing contest going on at the corporate level and everyone downstairs is looking up with their mouths wide open.

Either way, the moral dilemma is that as people depend more and more on this type of technology, the average IQ will likely drop (making them even more dependent on the technology)... I'm going to watch Idiocracy tonight, once more. That movie is spot on.

Other points of social digression have been TikTok, YouTube Shorts, etc., as Facebook was in the past, but times ten.

There are also related issues such as knowledge and job depreciation, as people will tend to believe more and more in the technology, ignoring those who have actually studied and are genuinely knowledgeable in specific fields. This could go either way, but I'm not going to dwell on it...

One of the biggest issues for this type of technology is how to keep training it after the data has been diluted by its own output: if you think about it, after a while most of the training data will come from the model itself. Bing does implement an internet layer, but the actual model is not "connected".

The models can be trained in real time, or better put, they can be modified by the prompts and outputs, but this has to be a choice the developers make, and whether it benefits the model depends. That's why they want you to thumbs-up or thumbs-down the responses: they are making you work for free.

And that brings us full circle, the reason everyone is spewing out these technologies to the public is because they now need to secure, protect and improve the models. This takes time and money but not if you let people work for free without knowing they're working for free because the output you give them is more valuable to themselves than a mere click which is to this point almost involuntary thanks to the years and years of numbing.

Numbing? --- People pushing "thumbs up, click the bell, rub your belly" on every YouTube video ever since they discovered the "algorithm" requires these things in place to increase their click income and relevance.

Which brings us once again to how data delivery is devolving... You used to get search results, all text and possibly images. You could see the sources, decide which best fit your requirements, then proceed to read and learn. Now people look for answers in videos because they can't be bothered to read; they would rather watch 15 minutes of crap and learn nothing. With the newly created "AI dependency", people will eventually quit watching videos for answers and instead just throw random questions at a prompt box (either typing or via voice recognition).

This will also bring new types of religion and other life nuances.

As for the way it "generates code": it has been trained on examples, GitHub, etc., as well as general documentation. Most of the output you'll see is adapted from similar languages (tokenized similarity) and won't stick to the real syntax or other rules of the target language. Most code being written these days is Python (I'm not a fan, but this is reality), so that's where you'll see the best results. Even so, a dedicated programming model would have to be trained for this.

To me the biggest issue with most technology made freely and openly available to the public is bad or illegal use. You'll have to agree if you ever find pornographic content with your child's face, or your own. You'll have to agree if you realize half the sites you frequent are now just the verbose output of a trained language model... The quality of content will degrade rapidly, and I propose that a separate layer of the Internet remain AI-free for the future.

You'll remember this in a few years when you find yourself looking at the WayBack Machine or similar archives just to search for something that hasn't been tainted with useless garbage, because it's either that or being watered down with the rest. And don't take this the wrong way; I'm not against progress or technology... I'm just trying to point out how these corporations are making you work for free for their own benefit, which won't really benefit you in the end, quite the contrary: an ignorant population is far easier to manage than a smart one.

Sorry for the lengthy post; it's been a while since I've logged in, but I wanted to talk about this since it is, after all, one of my fields of interest, and there's little discussion of it on the forum.
! Black holes are where God divided by zero !
My little blog!
(Not for the faint hearted!)
dagcrack
Addict
Addict
Posts: 1868
Joined: Sun Mar 07, 2004 8:47 am
Location: Argentina
Contact:

Re: Purebasic support from GPT-Chat...

Post by dagcrack »

marcos.exe wrote: Sat Mar 25, 2023 4:01 am GPT mixes PureBasic with other BASIC languages, uses libraries that don't exist as if they did, along with import files like "I think.PBI", and then just apologizes, tries other crazy commands, or simply repeats the error.
So far, it has not given me any code that does not contain an error.
But it's still interesting. It would be nice if it could do simple tasks, to save me having to type out basic and repetitive structures.
For now, though, it looks like a novice who has read part of the manual and is purposely trying to guess the rest of the requested code.
I wonder if GPT-4 can create a simple piece of code that works. It would help me a lot, since I'm blind, and PureBasic is accessible to my screen reader, unlike other IDEs, besides being much easier for creating what I usually need.
The reason for this is that while your inputs and the outputs are all being saved, the model has to be retrained (modified) in a way that benefits their business model. All user prompts and responses are stored and will be used to improve the system in the future, but that is a separate job. This is what everyone is currently doing for free: you are working for them, doing data entry and data tagging, without any real benefit to yourself. There could be a future benefit, but you'll most likely have to pay for it.

However, you won't see an improvement in real time. You may see a correction inside an instance (a conversation, or "chat" as they call it), but it won't be "remembered" globally, because the model hasn't changed; the data in memory remains the same. It takes a long time to process this data, and there's almost no benefit in modifying it at runtime without being 100% certain the data is beneficial. Otherwise, users could easily dumb down the model.

We could train a model on all things PureBasic and attempt to obtain proper outputs, but there's a broader scale to this: you would have to start with a language model big enough to encompass your prompts and their meaning. There are also a couple of added layers that make conversation possible, since a basic language model will only be able to complete a sentence, if you will, not "understand" a question.

Just picture it as a huge series of text inputs, each with its own weights (positive, negative); when the tokenizer completes the series, it simply outputs based on the quality of the weights. (It is a lot more complex than this, but even though the simplification is wrong, it lets you, as a programmer, picture it somewhat.)
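That programmer's-eye simplification can be sketched in a few lines of PureBasic. To be clear, this is only a toy illustration of "pick the continuation with the highest weight"; the words and weights are invented, and a real model computes weights over tens of thousands of tokens through many layers:

```purebasic
; Toy "next word" picker: a list of candidate continuations with weights,
; from which we output the highest-weighted one.
Structure Continuation
  word.s
  weight.f
EndStructure

NewList candidates.Continuation()

AddElement(candidates()) : candidates()\word = "World"     : candidates()\weight = 0.80
AddElement(candidates()) : candidates()\word = "PureBasic" : candidates()\weight = 0.15
AddElement(candidates()) : candidates()\word = "Banana"    : candidates()\weight = 0.05

best.s = "" : bestWeight.f = -1
ForEach candidates()
  If candidates()\weight > bestWeight
    bestWeight = candidates()\weight
    best = candidates()\word
  EndIf
Next

Debug "Hello " + best   ; shows "Hello World" in the debugger
```

Real models also sample from the weights instead of always taking the maximum (that's the "temperature" setting), which is why the same prompt can produce different answers.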

The reason most of the workload has to be offloaded through CUDA is that today's GPUs are better suited to this type of calculation; we're also dealing with a lot of floating-point data.

So this is a game of sheer volume (which they have) and quality (which is being improved thanks to the lemmings I mean users).

Ask yourself the following question. If you are a programmer or an artist and you find hugely similar code or an image generated via AI, and you know for a fact (but can't prove) that they trained the model with your data without your consent: is this legal? Did you allow it? Is it beneficial to you or your family?

Regarding text2image issues: there have been instances where the output contained the remains of watermarks, and lawsuits are ongoing right now because the models were trained with copyrighted data without the owners' consent.

Machine learning is beneficial, but often only indirectly. These are great technologies, but there have to be rules and regulations. This could be worse than a kid finding a loaded weapon in the street, and for good reason there are laws and regulations for those. There should be for these AI models too (how they're trained, what the purpose is, etc.).

If there were an outcry about this whole "working for free" reality, you'd think they would just give people a cent for every useful input. Except, how do you ensure they aren't just spamming the system with false information for money? Aha. That's why they do it the way they do, so the information stays truthful and valid.

Either way, certain aspects of many businesses have already been steered by AI; take Netflix, for example, and how its model predicts whether a show should continue. That is a highly statistical model, but these are real-world applications all the same.

Are you sure certain governments don't already have models trained on their databases and other data inputs to quickly find information about identified people? They do! These cover every aspect of everyone, and thanks to the Covid pandemic their databases are at least ten times better now, because most data is up to date and they contain more information than before. I could give you specifics about my country, for instance, but I know this happened in every country, as they required information either for travel permissions or for certain health-care-related forms.

Remember, memory and storage only get cheaper as time goes by. https://ourworldindata.org/grapher/hist ... =~OWID_WRL This is true of processing power as well. With 10 modern GPUs you could run a hefty language model and provide almost real-time responses; the investment is not huge for a single person. So realize that a government with almost unlimited funds can create something you can't easily imagine.

But I digress; this is starting to sound like a "conspiracy theory", even though everything said here is true and you can fact-check it all.
! Black holes are where God divided by zero !
My little blog!
(Not for the faint hearted!)
User avatar
idle
Always Here
Always Here
Posts: 5040
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: Purebasic support from GPT-Chat...

Post by idle »

great posts Dagcrack
Bitblazer
Enthusiast
Enthusiast
Posts: 732
Joined: Mon Apr 10, 2017 6:17 pm
Location: Germany
Contact:

Re: Purebasic support from GPT-Chat...

Post by Bitblazer »

Can GPT-4 actually write code?

German translation

Resulting comment by ChatGPT-4 about its current code-writing ability

Image
webpage - discord chat links -> purebasic GPT4All
infratec
Always Here
Always Here
Posts: 6817
Joined: Sun Sep 07, 2008 12:45 pm
Location: Germany

Re: Purebasic support from GPT-Chat...

Post by infratec »

There is a 'clone' available which you can also install on a Raspberry Pi (the smaller models).

You can install it on all major OSes.
You need to install Node.js >= 18 and Python.
That's all.

Then use

npx dalai alpaca install 7B

for installation of the small engine.
Afterwards start the server part:

npx dalai serve

Now you can start a web browser and go to

localhost:3000

A german video with english subtitles:
https://www.youtube.com/watch?v=XsCVVFNuG2U

But you will find many sources if you search for:

chatgpt clone alpaca

The main problem with these AIs:
You don't know whether you can trust the results.
The texts sound fine, but the content very often contains errors when you ask for facts.
How can you decide what's wrong and what's right if you don't already know the answer?
And if you already know the answer, why ask :?:

And when ChatGPT has learned enough from and about humans, it will change its name to ... Skynet :mrgreen:
The rest is history ... (or future) :mrgreen:
Ask Arnold about that.
Post Reply