An approach using threads, some advice please.
I am thinking of using an approach as outlined below and, as the project is big and even a testbed is going to take some work, I would appreciate any input or advice before committing to it.
For the summary, let:
GE = Game Engine (AI, general work) - A thread
SR = Screen Render - A thread
UI = User Interface (input and general housekeeping)
This is all one program, but internally it will have these three parts.
I want these three parts to run threaded, without too much awareness of each other; they are largely independent, sharing only some information.
(The program will not be compiled with the threadsafe option.)
The UI role is to hang around waiting for user input and either deal with it or store it for the GE (instructions, orders, etc) and SR (eg, advising which way the map has been scrolled and by how much).
Note: UI does the initial startup, then launches the two threads and drops into a listening loop. It also advises the other two to pause or stop completely and does things like saves, the closedown, etc, etc.
The GE role is to deal with the information to hand, eg, user inputs as reported by UI and internal info as created by the game dynamics, and resolve anything that needs resolving.
The SR role is to update the screen as often as possible. That is all it does, using information made available by GE and UI.
Each will have data reserved for itself, and there will be shared data.
There will be a DoNotDoAnythingSignificant flag which all three can view at any time, but can only set with a successful LockMutex(x) or TryLockMutex(x). In general, Mutex will only be used when shared data areas (like the DoNotDoAnythingSignificant flag) are updated.
Each will Delay(1) after a cycle. (Is this good or necessary?)
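To make the flag scheme concrete, here is a minimal sketch of the DoNotDoAnythingSignificant idea. The forum's language is PureBasic, but C++ stands in here; the flag and the LockMutex/TryLockMutex roles are from the post, everything else (names, types) is assumed for illustration:

```cpp
#include <mutex>

// Shared pause flag, guarded by a mutex as described above.
struct SharedState {
    std::mutex m;
    bool doNotDoAnythingSignificant = false;
};

// Blocking set, like LockMutex(x).
void setFlag(SharedState& s, bool value) {
    std::lock_guard<std::mutex> lock(s.m);
    s.doNotDoAnythingSignificant = value;
}

// Non-blocking set, like TryLockMutex(x); returns false if the lock is busy.
bool trySetFlag(SharedState& s, bool value) {
    if (!s.m.try_lock()) return false;
    s.doNotDoAnythingSignificant = value;
    s.m.unlock();
    return true;
}

// Note: reads should also take the lock. Viewing a plain bool "at any
// time" without synchronization is a data race in C++; an atomic flag
// would allow lock-free reads instead.
bool readFlag(SharedState& s) {
    std::lock_guard<std::mutex> lock(s.m);
    return s.doNotDoAnythingSignificant;
}
```

One caveat on the plan as stated: letting all three threads "view at any time" without the mutex is only safe if the flag is atomic; otherwise readers need the lock too.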
I am hoping this will make the game graphics flow as smoothly as possible on any machine via SR, and the GE will just flog away moving units, resolving conflicts, etc as fast as possible. Obviously there is some interaction but with what I have concept-designed so far it is (hopefully) minimal.
Is the concept sound? What pitfalls do you see? Will the three conflict with each other in competing for system resources (main timeslices)? Provided I ensure that updating shared data is properly governed by mutexes, is this safe?
Sorry for such a broad-based question, but game design is not my forte (I have only written small stuff) and I really don't want to go some way down a poor path before I discover it is a poor path, especially at such a fundamental level.
PS: There may be other "supporting" threads. For example GE might use one or more PathFinding requests (threads), eg "give me best path for this unit type from x1,y1 to x2,y2 with 'n' waypoints" and PathChecking requests ("Has this path become blocked/invalid").
PPS: I had considered using client/server even for single user (Quake-ish approach I think) but that adds extra complexity (for me) so I'll stick with single player versus AI and hopefully develop a good and basically re-usable AI.
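The PathFinding-request idea in the PS could be sketched as a worker thread servicing a request queue, with the GE polling for finished paths each cycle rather than blocking. This is C++ standing in for PureBasic, and every name here (PathRequest, PathFinder, the straight-line "search") is a hypothetical placeholder, not the poster's design:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>
#include <vector>

// Hypothetical request/result types for the PathFinding idea above.
struct PathRequest { int x1, y1, x2, y2; };
struct PathResult  { std::vector<std::pair<int,int>> waypoints; };

class PathFinder {
public:
    PathFinder() : worker(&PathFinder::run, this) {}
    ~PathFinder() {
        { std::lock_guard<std::mutex> lk(m); stopping = true; }
        cv.notify_one();
        worker.join();
    }
    // GE submits a request and carries on with its cycle.
    void submit(PathRequest r) {
        { std::lock_guard<std::mutex> lk(m); requests.push(r); }
        cv.notify_one();
    }
    // GE polls for finished paths each cycle instead of blocking.
    bool poll(PathResult& out) {
        std::lock_guard<std::mutex> lk(m);
        if (results.empty()) return false;
        out = results.front();
        results.pop();
        return true;
    }
private:
    void run() {
        for (;;) {
            PathRequest r;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&]{ return stopping || !requests.empty(); });
                if (stopping && requests.empty()) return;
                r = requests.front();
                requests.pop();
            }
            // Placeholder "search": a straight line stands in for real
            // pathfinding such as A*.
            PathResult res;
            res.waypoints = {{r.x1, r.y1}, {r.x2, r.y2}};
            { std::lock_guard<std::mutex> lk(m); results.push(res); }
        }
    }
    std::mutex m;
    std::condition_variable cv;
    std::queue<PathRequest> requests;
    std::queue<PathResult> results;
    bool stopping = false;
    std::thread worker;
};
```

The submit/poll split keeps the GE loop free-running: a path that takes several cycles to compute simply shows up in a later poll.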
Hi Trond.
I am thinking all three running concurrently, each with its own task, equals overall faster (esp the rendering). Is this bad thinking? Am I going overboard?
Mainly I didn't want the rendering wedged into a sequence, I just wanted it to check what was in a "viewport" (terminology?) and render that portion of the map, in its current state, as fast and as often as possible. So smoke, etc, can flow even if the GE hasn't got its act together for unit movement, conflict resolution, etc.
Would appreciate your ideas and input, thanks!
Dare2 cut down to size
- netmaestro
Pretty much any game that isn't small is going to run full-screen. So you really aren't managing a set of gadgets, it all has to be done in the game loop. AI is closely tied to what's happening onscreen, so I'm not sure there's anything to be gained by processing it async. I'm working on a chess engine/ui right now, and I'm not using threads for that. I guess I'm with Trond on this one, threads are of limited value for this unless you have a clearly-defined need for async processing.
BERESHEIT
Dare wrote:I am thinking all three running concurrently, each with its own task, equals overall faster (esp the rendering). Is this bad thinking? Am I going overboard?
Umm, the total number of CPU cycles your CPU can do will not increase if you use threads. However, Windows needs to synchronize the threads, so the number of CPU cycles available to your program will be lower.
Unless the CPU has a dual core or hyper-threading or there are multiple CPUs. But even then, you won't get much of a speed increase with more than two threads.
Also, you can't use DirectX inside threads.
- Bonne_den_kule
Thanks for the feedback, guys.
Maybe saved me a bit of time down a go-nowhere path, so thanks!
Hmmm. The idea was mainly to ensure trivial stuff (eye candy like drifting smoke, waves, etc) was fluid, and each would be controlled by elapsed time anyway (not wanting sudden grand-prix stuff with the way rivers etc are rendered).
In general, is the idea of rendering as often as possible good, or should there be (are there compelling reasons for) some sort of fixed timing, like framerate?
If "as often as possible", then how best to get some concurrency going sans threads?
Trond wrote:Also, you can't use DirectX inside threads.
That kills the idea, I guess.
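One single-threaded answer to "how do I keep eye candy fluid sans threads" is to drive every animation off elapsed time inside one loop, so smoke drifts at the same speed whether the loop runs at 30 or 300 cycles a second. A minimal C++ sketch (the forum's language is PureBasic; Smoke, the drift rate, and renderFrame are all assumed names for illustration):

```cpp
#include <chrono>

// Each effect advances by elapsed time, not by loop iteration.
struct Smoke {
    double y = 0.0;
    double driftPerSecond = 12.0;  // assumed units: pixels per second
    void update(double dtSeconds) { y += driftPerSecond * dtSeconds; }
};

// One single-threaded cycle: measure dt, update everything, then render.
// renderFrame is a stand-in for the real SR drawing code.
template <typename RenderFn>
void runOnce(Smoke& smoke, std::chrono::steady_clock::time_point& last,
             RenderFn renderFrame) {
    auto now = std::chrono::steady_clock::now();
    double dt = std::chrono::duration<double>(now - last).count();
    last = now;
    smoke.update(dt);   // in a real loop: update every effect and unit
    renderFrame(smoke);
}
```

Because update takes dt, rendering "as often as possible" and rendering at a capped framerate produce the same on-screen speeds; only smoothness differs.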
Dare2 cut down to size
Threads are an excellent idea
As mentioned before, using threads you will utilize closer to 100% of many current and most future machines. Dual-core and SMP are here to stay.
So threading is good.
I don't know what people mean when they say you cannot use DirectX inside threads. Does it mean that only the main thread (the process if you will) can do DX calls? Or does it mean that DirectX commands shouldn't be made by more than one thread as DX calls are not thread-safe? If the latter, then there is no problem, because you only have one thread doing the rendering. (Not sure about the input thread though. Maybe that has something to do with DX also).
The idea of clearly splitting the game engine from the rendering and having them run asynchronously is very good. This will allow you to render fewer times on a slow computer while the physics (game engine) are not affected. This is important. It lets the game run on slow computers at the same speed (game time) as on fast computers, with the only difference (hopefully) being frame-rate.
Be aware though, there is a slight overhead in using threads on single-core single-CPU systems, as the operating system has to switch between the threads (context switch). Other than that I think it is an excellent idea.
The most likely way for the world to be destroyed, most experts agree, is by accident. That's where we come in; we're computer professionals. We cause accidents. (Nathaniel Borenstein)
http://www.wirednerd.com
Trond wrote:No, you can only render from the main thread.
Are you sure? I mean that sounds almost as "crappy" as the limitations with SDL's event-loop. Only the main thread can process events.
Anyways, if it is the main thread that must make DX calls, then so be it, you just have to make sure that be the case. So start a thread for the engine and inputs, then enter the rendering routine.
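That arrangement, engine and inputs in a spawned thread while the main thread keeps all rendering calls to itself, might look like this in C++ (a sketch, not the poster's code; runDemo and the tick counter are illustrative, and the sleeps just simulate work):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};
std::atomic<int> engineTicks{0};

// GE loop: runs in its own thread so the main thread stays free for
// rendering, honoring the "render only from the main thread" restriction.
void engineLoop() {
    while (running.load()) {
        ++engineTicks;  // stand-in for moving units, resolving conflicts
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

// Main-thread side: spawn the engine, "render" a few frames, shut down.
int runDemo(int frames) {
    std::thread engine(engineLoop);
    for (int i = 0; i < frames; ++i) {
        // ...all DirectX / drawing calls belong here, in the main thread...
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }
    running = false;    // tell the engine thread to stop
    engine.join();
    return engineTicks.load();
}
```

The atomics give a clean shutdown signal without a mutex; the engine keeps ticking however slowly or quickly the main thread renders.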
u9 wrote:
Trond wrote:No, you can only render from the main thread.
Are you sure?
http://www.purearea.net/pb/english/manu ... index.html
Again, thanks everyone. This discussion is really helping me.
Just on threads.
My understanding is that each process has one or more threads, and that each thread has its own priority, timeslice, registers and stack (and an OS "management" area) but shares memory etc with the parent thread/process.
I also thought that adding new threads gave some additional time in the overall scheme of things (re other processes) without too much of a penalty hit on the original process/thread.
Wading through the SDK now I can't see where I got that idea, or find anything on how timeslices are allocated.
So, simplistically (ignoring priority, etc) a program/process has 1/NumberOfProcesses of the CPU time. Say 10 processes were running, each would have 1/10 of the available time.
Now, when a new thread is created, does everything get 1/11 (so a small gain to the creator) or is the creator penalised, and both it and the thread get 1/20 of the time whilst the other 9 processes still have 1/10? Or is it some other adjustment?
Hmmm. As u9 mentioned, maybe I can do all the graphics related stuff (SR) in the parent and make a thread the UI bit. Actually, in some ways this would be better/easier.
Anyhow, thanks again. Really appreciated.
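The shared-memory/own-stack model described above can be demonstrated in a few lines. This is C++ standing in for PureBasic, and the names are illustrative: the two threads share the process's memory (the atomic counter) while each has a private localSum on its own stack:

```cpp
#include <atomic>
#include <thread>

// Shared process memory: visible to every thread.
std::atomic<int> shared{0};

void work(int n) {
    int localSum = 0;            // lives on this thread's own stack
    for (int i = 0; i < n; ++i) ++localSum;
    shared += localSum;          // one synchronized update at the end
}

// Spawn two threads, let each count privately, then combine the results.
int runPair() {
    std::thread a(work, 1000), b(work, 1000);
    a.join();
    b.join();
    return shared.load();
}
```

Accumulating into the stack-local variable and touching the shared counter once per thread is also the cheap way to avoid contention on the shared data.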
Dare2 cut down to size
You don't get more time-slices by creating more threads (I think, I can't really recall... well maybe you could try that out). But remember that any computer with a dual-core will give you maybe something in excess of 90% more computational power if the work is well distributed between the threads. I say go for it 