If you are up to it, a more full-featured version of the WebBrowser.pb program would be really nice. The program is good and simple as far as it goes, but once you change the URL it is supposed to visit, what it doesn't have or do really shows up.
For instance, there is no cookie support, so sites don't remember you, don't keep track of what page you are on, or anything like that. You have to allow cookies if you want a complete web experience.
Other sites and forms come up okay with the example browser, which shows real attention to detail in PureBasic. But you don't get download, upload, zoom, search, home, or tab capabilities included. You may not need all of these, but it's worth pointing out the differences.
It would almost make sense to just build on existing browser APIs, but I looked into this a bit, and it gets complicated fast. You get into questions about why you want or need to do this, and there may be licenses or fees involved as well. For Firefox, they assume you want to write an add-on or something. For Chromium, it means using their libraries and APIs to write your own browser, which is what Google's Chrome, Opera's Opera, and FlashPeak's Slimjet all rely on now. The engine's name is Blink; it used to be WebKit.
I'd really like to see cookies supported in PureBasic, then see how far that takes me. I could maybe get by with just that, but honestly I don't know enough to say for sure.
In my years of using and working with computers, I find that the networking involved, especially with the internet, accounts for a huge part of what we do with them now. Take just things related to socializing (email, Facebook, Twitter, blogs, forums), checking out the current news in detail, or conducting searches for more information on just about any subject, and it is an entirely different world from when I was younger. Back then I might go to a public library once every two weeks, read some articles, check out some books, and do most of my school research using outdated sets of encyclopedias. The difference is almost beyond description.
The three hardest things for me to deal with when networking are:
(1) Finding time to do it all
(2) Getting the wording right so the searches produce real results
(3) Separating out what is trash, outmoded, doubtful, or false from the good stuff
It can take days of searching before you get results or realize that the information just isn't out there to be found. That's wasted time, and that's never good. The only cure I can see is: Get the computer to do more on its own, with less time spent making it happen on our part.
Been busy today, but I had time to see what I could find out about how browsers handle cookies. Cookies originate as a request from the server side, because servers can be inundated by users/clients on their PCs, and besides, each time you return you could actually be dealing with a different server in a multi-server environment. So the idea is that the cookie from a site represents all that site needs to know about you: who you are, your shopping cart, where you left off the last time you visited, even what page you were on if it was presenting multi-page search results. By convention, this state is stored in small files known as cookies on the user's PC.
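The round trip described above can be sketched with Python's standard library (not what any particular browser does internally, just an illustration of the mechanism; the cookie name and value here are made up for the example):

```python
from http.cookies import SimpleCookie

# The server asks the client to remember state by sending a Set-Cookie
# header in its response (hypothetical session cookie for illustration):
server_header = 'session_id=abc123; Path=/; Max-Age=3600'

# The browser parses the header and stores the cookie...
jar = SimpleCookie()
jar.load(server_header)

# ...and on each later request to that site it sends the name=value
# pairs back in a Cookie request header, restoring the server's memory
# of who you are:
cookie_header = '; '.join(f'{k}={v.value}' for k, v in jar.items())
print(cookie_header)  # session_id=abc123
```

Attributes like Path and Max-Age tell the browser where and for how long to send the cookie back; only the name=value pair itself is returned to the server.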
When you wipe your browser history, the cookies for that one browser are all deleted. Each browser keeps its cookies in its own special place on your hard drive.
Here is a link I found that gets into how you can handle cookies using Java APIs. I don't know exactly how existing browsers do it, but the process should be similar: the requests come from the server, and the client's browser does what is asked of it, provided you have enabled the use of cookies as part of the browser's setup. The link is: http://stackoverflow.com/questions/4907 ... le-cookies
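For comparison, here is the same idea in Python's standard library rather than the Java APIs the link covers (a sketch, not a full browser): an opener with an HTTPCookieProcessor automatically stores cookies from responses and sends them back on later requests, which is roughly what a browser does for you once cookies are enabled.

```python
import urllib.request
from http.cookiejar import CookieJar

# The jar plays the role of the browser's cookie store on disk
# (though this one lives only in memory).
jar = CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# First request: any Set-Cookie headers in the response land in `jar`.
# Later requests to the same site: the opener adds the matching Cookie
# header automatically. (The URL is a placeholder and this needs network
# access, so it is left commented out.)
# response = opener.open('http://example.com/')
# print(len(jar))  # number of cookies the client is now holding
```

The point is that the client-side logic is generic: honor what the server asks, subject to the user's cookie settings, which is why a library-level cookie jar could cover most of what the example browser is missing.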