Server uses all CPU time




I was running the core network demos on a separate machine from the client when I noticed that the server, even in headless mode, is eating all CPU time. Specifically the network viewer demo (small world or large world, it made no difference) and synchronization 2. Adjusting IFps on both engine.game and engine.physics appears to have had no effect on CPU usage. When a client connects it gets all the updates just fine and doesn't seem to suffer any problems.

 

Any idea as to what I can do to bring the server back under control? It seems like something is still causing updates more often than they should be/need to be.

Link to post

In the synchronization 2 sample the server constantly creates new objects (up to 1000) and updates them every 30 ms. It is quite an intensive job.

Link to post



It creates 10 objects per second, one every 100 ms. I can switch it to object reuse, rather than creating and deleting them, but I'm willing to bet it still takes all the CPU resources. It's worth noting that it eats all CPU time even when no clients are connected, which means it doesn't have to iterate through the list of objects to send to clients. It feels like something isn't sleeping that should be. 1000 objects is not a problem for physics, even on a much weaker machine: physics updates in the demo below the 1000-object mark take less than 5 ms (max; the average is around 2 ms).

 

On a weaker machine (one that has graphics) things are actually severely graphics limited, but physics still finishes ridiculously fast. If I can get 90 fps on a weaker machine on low graphics settings (yes, obviously physics is capped earlier by default), then if I disable graphics altogether there should be lots of idle time: we now only need to run at the physics cap rate and call update() at the same rate for anything else, not to mention the spare time gained just by not rendering. But when I actually do disable rendering it still takes all the CPU time, which is the issue.

 

Changing the network update rate out to even 1000 ms has no effect on CPU usage, nor does varying engine.game.setIFps() or engine.physics.setIFps(). Physics objects are freezable. But again, given the above performance numbers, physics is not the issue here; rather, the engine does not seem to be pausing the correct amount of time between updates.

 

 

EDIT: Forgot to mention that all of the demos do this, not just synchronization_02.

Link to post

You can use the world_analyze console command to see which functions in the script consume the most CPU time.

In the synchronization 2 demo it's SharedObject::updateMeshTransform(). This function updates the current transform for 1000 objects every network tick, so that the network part of each object knows whether it needs to send changes over the network (if yes it sends, if not it doesn't). And this is done even if no clients are connected.

 

Another issue about 100% CPU usage is described here:

http://www.jenkinsso...php?topic=643.0

http://www.jenkinsso...hp?topic=1099.0

 

You can also try running RakNet samples and see what happens.

 

Network plugin sources are available in the binary and source SDK, so you can try to tweak the sleep time between network ticks and the thread priority for RakPeer.
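As a rough sketch of the idea (this is not the RakNet plugin source; network_tick(), run_network_loop(), and the tick period are hypothetical stand-ins), sleeping out the remainder of each fixed tick period instead of spinning releases the CPU between ticks:

```cpp
#include <chrono>
#include <thread>

// Hypothetical stand-in for the per-tick network work; the real plugin
// would service RakPeer here instead.
static int ticks_done = 0;
void network_tick() { ++ticks_done; }

// Run network ticks at a fixed interval: do the tick's work, then sleep
// away whatever is left of the tick period so the thread yields the CPU
// instead of busy-waiting until the next tick.
void run_network_loop(std::chrono::milliseconds tick_period, int tick_count)
{
    using clock = std::chrono::steady_clock;
    for (int i = 0; i < tick_count; ++i) {
        auto tick_start = clock::now();
        network_tick();
        auto elapsed = clock::now() - tick_start;
        if (elapsed < tick_period)
            std::this_thread::sleep_for(tick_period - elapsed);
    }
}
```

Measuring the elapsed work time and sleeping only the remainder keeps the tick rate steady even when the per-tick work varies, unlike a fixed sleep() after each tick.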

Link to post

The bigger question is whether Unigine eats up 100% of CPU time running with the renderer disabled and without any networking going on. To my understanding, there is no networking going on during the test and the server doesn't do anything, yet 100% of the CPU is taken. We need to verify that.

Link to post

The bigger question is whether Unigine eats up 100% of CPU time running with the renderer disabled and without any networking going on. To my understanding, there is no networking going on during the test and the server doesn't do anything, yet 100% of the CPU is taken. We need to verify that.

 

I think I read a while ago that RakNet does this for better real-time management. There is a switch in the RakNet API to turn this off(?).

But a quick Google search didn't find any results. Maybe it was stated in the Unigine documentation.

 

But the 100% CPU issue has been the same for as long as we have used Unigine with networking.

Link to post

In other words, unless the Unigine team makes a dedicated server option, we are doomed to 100% CPU usage when running Unigine without the renderer?

Link to post

I haven't seen any 'sleep' or 'spin wait' routines in the main update() cycle in the Unigine source, so I suppose it is designed to consume all available CPU resources when possible. However, that is not a problem for fullscreen single-player games, and I'm sure it will be fixed for the dedicated server scenario. You'd just have to wait a year or two until the Unigine guys make their own networked FPS game, as we are waiting for the outdoor nature-scene game ;)

Link to post

Well, I mean, Unigine has cool visuals and all, but the reality is that the modern world needs robust multiplayer. There are only a handful of successful games that are single-player only. Low-lag scalable multiplayer, support for DLC, in-game purchases, expansion packs, etc. is what the engine needs to succeed on the market.

 

Sure, a listen server will consume 100% of CPU time, as it renders the game. But if the renderer is disabled, what is there to consume CPU time?! Bad design practices?

 

Anyhow, we are looking into that right now.

Link to post

We did. However, we are running a Linux server. What happens on Windows is a known issue, and it affects not only Unigine but also Darkplaces, for example.

 

Eating up 100% CPU on Linux is inexcusable. Shell accounts get shut off for that.

 

The ultimate issue is that when we have only one object updating and no renderer running, CPU usage on Linux should never be 100%. What happens is that Unigine reports an fps increase (what for? no rendering is happening) but still uses 100% of the CPU. So this isn't a RakNet issue per se, but Unigine's issue.

 

If we run Unigine with no renderer and no network, bare bones, why on Earth would it load the CPU to 100%?

 

EDIT: If users had a dedicated machine to run a dedicated server, I would care less. The reality, however, is that people run the dedicated server on one screen and then run the game on another screen (by screen I mean Linux terminal screens). So with things the way they are right now, the user who runs a dedicated server and wants to play with his buddy suffers a horrible fps drop. On Windows, it's solved by not having a dedicated server; Windows users rarely know how to run one anyway, so people run a listen server within the game instance. However, Unigine is positioned on the market as a Linux engine too (and I have to release the game on Linux per my agreement). Linux users do not tolerate 100% CPU load for a game that runs as a dedicated server with rendering disabled (nor do average users use Windows-based hosting services).

Link to post

With regards to that post, RakNet is noticeably interfering with the OS and inducing overall lag in the system. In the RakNet module, I've increased the sleep()s from 0 (which essentially means release the CPU and try to get back into the scheduler as fast as possible) up to 30 ms. This has no effect on the CPU usage, but notably Unigine now reports on the console that it's getting 1000+ fps, rather than the 300 fps it was getting before. Clearly 300 to 1000 fps is excessive, ridiculous, and unneeded. The impression I have is that something is forcing a refresh too often, but I'm not sure what's doing it. If you could tell me how to lock whatever tick that is, I expect the usage to plummet nicely.

Link to post

I have no clue if this works with the null renderer or provides any benefit for your problem, but it's possible to limit Unigine's frame rate via engine.game.setFTime(). See this post. It might be a way to reduce the RakNet update frequency.

Link to post

If we run Unigine with no renderer and no network, bare bones, why on Earth would it load CPU 100% ?

It's the update cycle: it will be performed as often as possible and will consume the available CPU resources. If you do not need such behaviour, use sleep() in main.cpp.
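As a minimal sketch of that fix (not Unigine's actual main.cpp; run_capped_loop() and its parameters are made up for illustration), a frame cap in the main cycle could look like this, sleeping out the remainder of each frame instead of letting the loop spin:

```cpp
#include <chrono>
#include <thread>

// Cap the main loop at a target frame rate: after each frame's work,
// sleep for whatever is left of the frame budget so the process idles
// instead of pegging a core. Returns how many frames ran in the budget.
int run_capped_loop(double target_fps, std::chrono::milliseconds budget)
{
    using clock = std::chrono::steady_clock;
    const auto frame_time = std::chrono::duration<double>(1.0 / target_fps);
    const auto deadline = clock::now() + budget;
    int frames = 0;
    while (clock::now() < deadline) {
        auto frame_start = clock::now();
        // In a real main.cpp the engine update (and render, if any)
        // would be called here.
        ++frames;
        auto elapsed = clock::now() - frame_start;
        if (elapsed < frame_time)
            std::this_thread::sleep_for(frame_time - elapsed);
    }
    return frames;
}
```

With the renderer disabled the work per frame is tiny, so nearly the whole frame is spent sleeping and CPU usage drops accordingly; the trade-off is that sleep granularity puts a floor on how precisely the cap is held.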

Link to post