GPT4All - The better AI chatbot?

For everything that's not in any way related to PureBasic. General chat etc...
User avatar
idle
Always Here
Posts: 5042
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: GPT4All - The better AI chatbot?

Post by idle »

We might be better off with a different type of model altogether. Masked-word models can learn unsupervised, and the compiler could be used to teach one to do code completion and porting between PB and C. Whether this can be achieved with the models supported by GPT4All I'm not sure, but it'll be interesting to try.
Bitblazer
Enthusiast
Posts: 733
Joined: Mon Apr 10, 2017 6:17 pm
Location: Germany
Contact:

Re: GPT4All - The better AI chatbot?

Post by Bitblazer »

Working link for the GPT4All Discord.

Another rentable cloud GPU service.
webpage - discord chat links -> purebasic GPT4All
User avatar
idle
Always Here
Posts: 5042
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: GPT4All - The better AI chatbot?

Post by idle »

Quick hack of a web chat bot.

You need to download a GPT4All model (the default one is "ggml-gpt4all-j-v1.3-groovy.bin")
and change the path on line 170 to point to it:

path.s = "C:\Users\idle\AppData\Local\nomic.ai\GPT4All\ggml-gpt4all-j-v1.3-groovy.bin"

You will also need the lib:
https://dnscope.io/idlefiles/gpt4all_pb.zip

It can't handle multiple clients; the C lib needs adapting so the callbacks can accept a user-data pointer.
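To support multiple clients, each callback would need to carry a user-data pointer identifying the connection. A sketch of what adapted prototypes might look like (hypothetical; the extra *userdata parameter is not in the current C lib):

```purebasic
; Hypothetical prototypes if the C lib were adapted to pass user data;
; the *userdata parameter shown here does not exist in the current lib.
PrototypeC llmodel_prompt_callback2(token_id.l, *userdata)
PrototypeC llmodel_response_callback2(token_id.l, *response, *userdata)

ProcedureC CBResponsePerClient(token_id.l, *response, *userdata)
  ; *userdata could hold the client connection ID, so tokens from
  ; concurrent prompts get appended to the right client's response.
  ProcedureReturn #True
EndProcedure
```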

Code:

Structure llmodel_error 
   *message;            // Human readable error description; Thread-local; guaranteed to survive until next llmodel C API call
   code.l;              // errno; 0 if none
EndStructure ;

Structure arFloat 
  e.f[0] 
EndStructure 
Structure arLong 
  e.l[0] 
EndStructure   

Structure llmodel_prompt_context Align #PB_Structure_AlignC     
    *logits.arfloat;      // logits of current context
    logits_size.i;        // the size of the raw logits vector
    *tokens.arlong;       // current tokens in the context window
    tokens_size.i;        // the size of the raw tokens vector
    n_past.l;             // number of tokens in past conversation
    n_ctx.l;              // number of tokens possible in context window
    n_predict.l;          // number of tokens to predict
    top_k.l;              // top k logits to sample from
    top_p.f;              // nucleus sampling probability threshold
    temp.f;               // temperature to adjust model's output distribution
    n_batch.l;            // number of predictions to generate in parallel
    repeat_penalty.f;     // penalty factor for repeated tokens
    repeat_last_n.f;      // last n tokens to penalize
    context_erase.f;      // percent of context to erase if we exceed the context window
EndStructure 

PrototypeC llmodel_prompt_callback(token_id.l);
PrototypeC llmodel_response_callback(token_id.l,*response); 
PrototypeC llmodel_recalculate_callback(is_recalculating.l);

ImportC  "libllmodel.dll.a" 
  llmodel_model_create(model_path.p-utf8);
  llmodel_model_create2(model_path.p-utf8,build_variant.p-utf8,*error.llmodel_error); 
  llmodel_model_destroy(model.i);
  llmodel_loadModel(model.i,model_path.p-utf8);
  llmodel_isModelLoaded(model.i);
  llmodel_get_state_size(model.i);
  llmodel_save_state_data(model.i,*dest.Ascii);
  llmodel_restore_state_data(model.i,*src.Ascii);
  llmodel_prompt(model,prompt.p-utf8,*prompt_callback,*response_callback,*recalculate_callback,*ctx.llmodel_prompt_context);
  llmodel_setThreadCount(model,n_threads.l);
  llmodel_threadCount.l(model.i);
  llmodel_set_implementation_search_path(path.p-utf8);
  llmodel_get_implementation_search_path() ; returns string
EndImport 

EnableExplicit 

Global gModel,gCtx.llmodel_prompt_context
Global gErr.llmodel_error
Global path.s,gResponce.s,Quit 

OpenConsole()  

ProcedureC CBResponse(token_id.l,*response);   
    
  Print(PeekS(*response,-1,#PB_UTF8))  
    
  ProcedureReturn #True   
  
EndProcedure 

ProcedureC CBNetResponse(token_id.l,*response);   
  
  gResponce + PeekS(*response,-1,#PB_UTF8)
  
  ProcedureReturn #True   
  
EndProcedure 

ProcedureC CBPrompt(token.l) 
    
    ProcedureReturn #True 
    
EndProcedure   

ProcedureC CBRecalc(is_recalculating.l) 
  
   ProcedureReturn is_recalculating
  
EndProcedure   

Procedure Process(client) 
  Protected length,*buffer,*mem,res,pos,input.s,prompt.s,content.s
  
  *buffer = AllocateMemory($FFFF) 
  length = ReceiveNetworkData(client,*buffer,$ffff)
  
  If PeekS(*buffer,3,#PB_UTF8) = "GET"  
    
    input.s = PeekS(*buffer+5,-1,#PB_UTF8) 
    pos = FindString(input,"HTTP/1.1")-1 
    input = Left(input,pos)  
    input = ReplaceString(input,"%20"," ") 
    input = ReplaceString(input,"+"," ") 
    pos = FindString(input,"query?fquery=")
    If pos 
      input = Right(input,Len(input)-(pos+12)) 
          
      PrintN("input")
      PrintN(input)  
      PrintN("end input") 
      
      prompt.s = "### Human " + input + " ### Assistant"
      llmodel_prompt(gmodel,prompt,@CBPrompt(),@CBNetResponse(),@CBRecalc(),@gctx); 
      
    EndIf 
    
    content.s = "<!DOCTYPE html>" +#CRLF$
    content + "<html><head>" +#CRLF$
    content + "<title>Gpt4all PB</title>" +#CRLF$
    content + "</head><body>" + #CRLF$
    content + "<form action='/query'>" + #CRLF$
    content + "<label for='fquery'>Query:</label><br>" + #CRLF$
    content + "<input type='text' id='fquery' name='fquery' value=''><br>" + #CRLF$
    content + "<input type='submit' value='submit'>" + #CRLF$
    content + "<p>" + gResponce + "</p>" + #CRLF$
    content + "</body></html>" 
    
    *mem = UTF8(content) 
    
    Length = PokeS(*Buffer, "HTTP/1.1 200 OK"+#CRLF$, -1, #PB_UTF8)                     
    Length + PokeS(*Buffer+Length, "Date: Wed, 11 Feb 2017 11:15:43 GMT"+#CRLF$, -1, #PB_UTF8) 
    Length + PokeS(*Buffer+Length, "Server: Atomic Web Server 0.2b"+#CRLF$, -1, #PB_UTF8)     
    Length + PokeS(*Buffer+Length, "Content-Length: "+Str(MemorySize(*mem))+#CRLF$, -1, #PB_UTF8)  
    Length + PokeS(*Buffer+Length, "Content-Type:  text/html"+#CRLF$, -1, #PB_UTF8)           
    Length + PokeS(*Buffer+Length, #CRLF$, -1, #PB_UTF8)                                      
    CopyMemory(*mem,*buffer+Length,MemorySize(*mem)) 
    Length + MemorySize(*mem)
    FreeMemory(*mem)  
    
    SendNetworkData(client,*buffer,length) 
       
    PrintN("output")
    PrintN(gResponce) 
       
    CloseNetworkConnection(client) 
    
    gResponce = ""
    
  EndIf  
  
  FreeMemory(*buffer)     
  
EndProcedure     

; = { "### Instruction", "### Prompt", "### Response", "### Human", "### Assistant", "### Context" };

;### Instruction:
;%1
;### Input:
;You are an Elf.
;### Response:

gctx\n_ctx = 2048     ;maximum number of tokens in the context window
gctx\n_predict = 128  ;number of tokens to predict
gctx\top_k = 40       ;top k logits to sample from
gctx\top_p = 0.965    ;nucleus sampling probability threshold
gctx\temp = 0.28      ;temperature to adjust the model's output distribution
gctx\n_batch = 12     ;number of predictions to generate in parallel
gctx\repeat_penalty = 1.2  ;penalty factor for repeated tokens
gctx\repeat_last_n = 10    ;last n tokens to penalize
gctx\context_erase = 0.5   ;percent of context to erase if we exceed the context window

Global Event,ServerEvent,ClientID,gbusy 

path.s = "C:\Users\OEM\AppData\Local\nomic.ai\GPT4All\ggml-gpt4all-j-v1.3-groovy.bin"
gmodel =  llmodel_model_create2(path,"auto",@gerr);
If gmodel  
  llmodel_setThreadCount(gmodel,12)    
  If llmodel_loadModel(gmodel,path)
    If CreateNetworkServer(0,80, #PB_Network_IPv4 | #PB_Network_TCP);
      OpenWindow(0, 100, 200, 320, 50, "gtp4all")
      Repeat 
        Repeat
          Event = WaitWindowEvent(20)
          Select Event 
            Case #PB_Event_CloseWindow 
              Quit = 1 
            Case #PB_Event_Gadget
              If EventGadget() = 0
                Quit = 1
              EndIf
          EndSelect
        Until Event = 0
        
        ServerEvent = NetworkServerEvent()
        If ServerEvent
          ClientID = EventClient()
          Select ServerEvent
            Case #PB_NetworkEvent_Connect 
              If gbusy = 0 
                gbusy = 1
              EndIf   
            Case #PB_NetworkEvent_Data 
              If gbusy
                Process(clientid) 
                gbusy=0 
              EndIf   
          EndSelect  
        EndIf   
        Until Quit = 1
      CloseNetworkServer(0)
    EndIf         
    llmodel_model_destroy(gModel) 
  EndIf 
EndIf 

Bitblazer
Enthusiast
Posts: 733
Joined: Mon Apr 10, 2017 6:17 pm
Location: Germany
Contact:

Re: GPT4All - The better AI chatbot?

Post by Bitblazer »

BarryG wrote: Sun Jun 04, 2023 10:09 pm[Edit] Not using it; didn't realise it needed to download 3 GB+ of data.
There are distilled versions of the BERT foundation model. For example DistilBERT.

Hugging Face is your friend for any model search.
User avatar
Caronte3D
Addict
Posts: 1027
Joined: Fri Jan 22, 2016 5:33 pm
Location: Some Universe

Re: GPT4All - The better AI chatbot?

Post by Caronte3D »

idle wrote: Wed Jun 07, 2023 6:25 am Quick hack of a web chat bot.
Hi, I only followed this post, but I get polink errors, mostly because I have no idea what I'm doing :D
Can someone make a simple step-by-step guide?

What I did was:

uncompress the zip library into a folder
download a model and put it in the same folder
save the last source idle posted (the server one)
change the path (line 170)
try to compile it

And I get these errors:
https://prnt.sc/lVItg8XI0sAS

EDIT:
With the ASM backend it compiles without errors, but when I try to launch the exe, a CMD window opens and then closes with nothing more.
Any help appreciated :wink:
Last edited by Caronte3D on Mon Jun 12, 2023 5:00 am, edited 1 time in total.
User avatar
idle
Always Here
Posts: 5042
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: GPT4All - The better AI chatbot?

Post by idle »

It's x64 only. I'll have another look on a clean machine to see if it's a missing C++ dependency.
Bitblazer
Enthusiast
Posts: 733
Joined: Mon Apr 10, 2017 6:17 pm
Location: Germany
Contact:

Re: GPT4All - The better AI chatbot?

Post by Bitblazer »

The next gpt4all release will be able to do Python and C++ programs on demand.

There is a current request for people who want other programming languages supported!
User avatar
Caronte3D
Addict
Posts: 1027
Joined: Fri Jan 22, 2016 5:33 pm
Location: Some Universe

Re: GPT4All - The better AI chatbot?

Post by Caronte3D »

idle wrote: Sun Jun 11, 2023 9:37 pm It's x64 only. I'll have another look on a clean machine to see if it's a missing C++ dependency.
I temporarily switched to compiling in console mode to try to see anything in the CMD window, and this is the output:
https://prnt.sc/VaWjf0OmPNZl
I have 16 GB of RAM; could that be a problem, or is it enough? :?
User avatar
idle
Always Here
Posts: 5042
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: GPT4All - The better AI chatbot?

Post by idle »

Thanks, that's good. Now go to 127.0.0.1 in a web browser.
Also check the number of threads.
User avatar
Caronte3D
Addict
Posts: 1027
Joined: Fri Jan 22, 2016 5:33 pm
Location: Some Universe

Re: GPT4All - The better AI chatbot?

Post by Caronte3D »

I can't get to the browser; the program crashes almost immediately after launch.
No window "gtp4all" is opened.

EDIT:
If I put a "MessageRequester" after "llmodel_loadModel(gmodel,path)", the requester never pops up, so the problem is something related to that line.
User avatar
idle
Always Here
Posts: 5042
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: GPT4All - The better AI chatbot?

Post by idle »

I hope it doesn't require a full install of the larger project; that would be nuts.

Change the number of threads to suit the number of execution units you have (I have 12), and change the n_batch field to suit as well:

gctx\n_batch = 12

There are no instructions, unfortunately.

Maybe create a temporary exe in the source folder and check the threadsafe compiler option.

Also, you can try changing the build-variant string to auto, default or avxonly.

I will check that I haven't built the libs with native flags. It might be a processor thing. I'm on an i5 with 12 threads.
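When the create or load step fails, the llmodel_error structure passed to llmodel_model_create2() should say why. A minimal sketch for diagnosing this kind of crash, assuming the lib fills in the message pointer and errno code on failure (the "avxonly" variant shown is just one of the strings mentioned in the thread, and the model path is an example):

```purebasic
; Sketch: report why model creation or loading failed instead of guessing.
Define path.s = "C:\models\ggml-gpt4all-j-v1.3-groovy.bin" ; adjust to your model
Define err.llmodel_error
Define model = llmodel_model_create2(path, "avxonly", @err)
If model = 0
  If err\message
    PrintN("create failed: " + PeekS(err\message, -1, #PB_UTF8))
  EndIf
  PrintN("errno: " + Str(err\code))
Else
  If llmodel_loadModel(model, path) = 0
    PrintN("model created but failed to load: check RAM and the model file")
  EndIf
EndIf
```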
User avatar
idle
Always Here
Posts: 5042
Joined: Fri Sep 21, 2007 5:52 am
Location: New Zealand

Re: GPT4All - The better AI chatbot?

Post by idle »

Caronte3D wrote: Mon Jun 12, 2023 11:43 am I can't get to the browser; the program crashes almost immediately after launch.
No window "gtp4all" is opened.

EDIT:
If I put a "MessageRequester" after "llmodel_loadModel(gmodel,path)", the requester never pops up, so the problem is something related to that line.
I've rebuilt the libs and tried it again. It still crashed on the old PC, but it did get one step further.
https://dnscope.io/idlefiles/gpt4all_pb.zip
User avatar
Caronte3D
Addict
Posts: 1027
Joined: Fri Jan 22, 2016 5:33 pm
Location: Some Universe

Re: GPT4All - The better AI chatbot?

Post by Caronte3D »

Nice! It works for me now :D
Does it only understand English, or does that depend on the model loaded?

EDIT:
Yes, it can respond in other languages if you ask it to, but very badly :lol:
Bitblazer
Enthusiast
Posts: 733
Joined: Mon Apr 10, 2017 6:17 pm
Location: Germany
Contact:

Re: GPT4All - The better AI chatbot?

Post by Bitblazer »

Caronte3D wrote: Tue Jun 13, 2023 9:36 am Nice! It works for me now :D
Does it only understand English, or does that depend on the model loaded?

EDIT:
Yes, it can respond in other languages if you ask it to, but very badly :lol:
In general, the AI can talk in any language you teach it ;)
Some models can process 40+ languages, but English is currently the best-supported language, so I always stick to English.

It really helps to learn something about how the technology actually works. It is amazing, but also very limited "in a way", while in other aspects it should be perfect (but isn't ...).

The dataset outside US culture is rather poor. Basically just like the "internet" (because that's where the AI learned from).
User avatar
Caronte3D
Addict
Posts: 1027
Joined: Fri Jan 22, 2016 5:33 pm
Location: Some Universe

Re: GPT4All - The better AI chatbot?

Post by Caronte3D »

My goal is to give it a manual as a document and let it answer questions about it.
I think that would be the easiest way.
I don't know where to start, but I will search around.