Core parts separation

I’ve got an idea: separate some core parts into independent processes that will be linked in some way (i.e. sockets).

In this way the core becomes more stable, much easier to debug, and easier to modify.

It isn’t that difficult, but I need some info:

[ol][li]Which way to use for linking the processes. As we can see, it’s currently implemented through sockets: auth gets connections, validates them, and hands them off to world.[/li]
[li]How many parts should exist and what they should include. E.g. split by maps: the 1st server takes all classic content, the 2nd TBC, and so on.[/li]
[/ol]

So post your good ideas about the subject here.

http://us.battle.net/wow/en/forum/topic/2065776684

This is not a new idea, better coders than any of us have attempted to properly split up the core, and failed…

if you want to separate parts, start from 0…

If someone does this, I hope they take the time to move the console commands to the realm server (or one of the servers that tie all the realms together, such as chat)

Thanks for the suggestion.

You don’t need to start from 0; at the beginning you can develop your cluster just by commenting out code. Then, when your cluster is working, you can clean up the code.

You need to split your core into at least 3 different parts to develop a dedicated cluster for each realm; if you want to share Arenas, Battlegrounds and, more generally, maps, you need to split it into at least 4 parts if I remember correctly, maybe 5 if you want to do it properly. Anyway, if your cluster structure is right, you will see that a large number of the handlers are easy to split, because the World of Warcraft features (I mean the dungeon finder, the chat, the battlegrounds, etc.) are all designed for a cluster-based core. The exception is the group system, which is less prepared for that.

One thing you need to do when you clusterize your core is build a system that shares GUIDs; the approach I have used is to split each GUID type into ranges.

For example, you have two nodes: A & B.

A will take the range of GUIDs 0:5000

B will take the range of GUIDs 5001:10000

For the moment that works, but you need a controller to tell each node which range it will take. So let’s say we have a controller C.

A & B start

C starts

C sends the range 0:5000 to A

C sends the range 5001:10000 to B

A reaches 5000, so it needs a new range; it asks C which one, let’s say 10001:15000

C sends the range 10001:15000 to A

Here is the problem: if you want a cluster that can be split across different machines, you have to account for network latency, which calls for an asynchronous system. That is why each node needs 2 GUID ranges.

Because now:

A & B start

C starts

C sends A the first range 0:5000 and the second range 10001:15000

C sends B the range 5001:10000 and 15001:20000

A reaches 5000, so it needs to change ranges:

A switches to range 10001:15000

A asks for a new range

C answers with range 20001:25000 for A

Now it’s correct: latency won’t be a problem if your ranges are large enough, and the range switch is asynchronous, which I think is better.

This system is only needed for characters and maybe pets; it depends on your cluster architecture/structure.
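The double-range scheme described above can be sketched as a small allocator. This is only an illustration (the type and function names are made up, not TrinityCore’s actual classes): the node consumes its active range and, when it runs out, switches to the standby range while asking the controller for a fresh one.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <utility>

// Hypothetical sketch of the two-range GUID allocator described above.
// RequestRangeFn stands in for the asynchronous request to controller C;
// in a real cluster it would be a socket round-trip.
struct GuidRange {
    uint64_t next;  // next GUID to hand out
    uint64_t last;  // last GUID in this range (inclusive)
};

class GuidAllocator {
public:
    using RequestRangeFn = std::function<GuidRange()>;

    GuidAllocator(GuidRange active, GuidRange standby, RequestRangeFn request)
        : _active(active), _standby(standby), _request(std::move(request)) {}

    uint64_t Next() {
        if (_active.next > _active.last) {
            // Active range exhausted: switch to the standby range and
            // request a fresh standby. Because the node now has a full
            // range to burn through, the controller's reply can arrive
            // later without blocking GUID generation.
            _active = _standby;
            _standby = _request();
        }
        return _active.next++;
    }

private:
    GuidRange _active;
    GuidRange _standby;
    RequestRangeFn _request;
};
```

In the example from the post, node A would be constructed with `{0, 5000}` as the active range and `{10001, 15000}` as the standby.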

The GUID problem is solved by setting up GUID ranges; the syncs are only really necessary if a node crashes. The last GUID would be stored at the logonserver; if the node comes back up, it gets its last GUID and range again from the logonserver, alias the master.

ATM the system is running with two nodes building a main world and a sub-world.

The only weakness: you have to rebase the GUIDs every 7-30 days.

And it takes a while: about 10-120 minutes. 4 GB would be rebased in about 50 minutes.

auth_server   auth_server   (as many as you want)
        \       /
         \     /
       proxy_server
         /     \
        /       \
    world       world

With this setup you would still have a single point of failure at the proxy server, but using normal client opcodes to communicate between the proxy and the other world servers would be the easiest way. That would be a start. From there you could break out chat, arenas, BGs, and instanced / non-instanced maps. Theoretically, with this setup, players could keep playing in, say, ICC even if the world process that handles map 570 crashed. Now, the only thing I can’t figure out how to handle gracefully is a player trying to leave ICC, who would be teleported back to map 570. Without some way to prevent the player from leaving the instance (and figuring that out would require sniffs from offy WHILE one of their map servers was crashed), you are going to crash the proxy and probably the map server that is hosting ICC.
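The routing idea above can be sketched as a map-id → world-node table kept by the proxy (everything here is hypothetical naming, and a node id stands in for what would really be a socket/session handle). When a map server crashes, the proxy drops its routes so packets for those maps can be rejected or queued instead of taking the proxy down:

```cpp
#include <cassert>
#include <cstdint>
#include <iterator>
#include <optional>
#include <unordered_map>

// Hypothetical routing table for the proxy_server described above.
class MapRouter {
public:
    void Assign(uint32_t mapId, int nodeId) { _routes[mapId] = nodeId; }

    // Drop every route owned by a crashed node so the proxy can reject
    // (or queue) packets for those maps instead of crashing itself.
    void NodeCrashed(int nodeId) {
        for (auto it = _routes.begin(); it != _routes.end();)
            it = (it->second == nodeId) ? _routes.erase(it) : std::next(it);
    }

    // Which world node currently owns this map, if any.
    std::optional<int> NodeFor(uint32_t mapId) const {
        auto it = _routes.find(mapId);
        if (it == _routes.end())
            return std::nullopt;
        return it->second;
    }

private:
    std::unordered_map<uint32_t, int> _routes;
};
```

An empty `NodeFor` result is exactly the ugly case discussed above: the proxy knows the map is down but has no graceful answer for a player who wants to zone into it.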

A further progression would then be multiple proxy servers that communicate with a proprietary protocol that we come up with to stay in sync. As bloody_76 said, you get the cluster talking and then you could clean out the code to have:

authd

proxyd

chatd

mapd

arenad

bgd

instead of having all that extra code in each worldd process…

– Brian

EDIT: Oh well, the formatting came out crappy…

Might as well just continue with Encore, it was being built as a “cluster”

https://github.com/Trinity-Encore/Encore/tree/master/Trinity.Encore.Services

I still stick to what I said WAY back then: A - C# sucks, B - There is a lot of good code that there is no reason to rewrite, C - C# sucks

– Brian

Hater.

We are all giving suggestion here, no need to insult.

LMAO! I was not aware that a non-sentient programming language could be insulted :P

BUT – back on topic, there are already some good protocols out there that could be used to keep the proxy servers in sync … MQ is the first that comes to mind: http://en.wikipedia.org/wiki/IBM_WebSphere_MQ

– Brian

[CENTER]AUTH_SERVER
|
LOGONSERVER
/        \
WORLD    WORLD
[/CENTER]

This is the setup I’m using ATM with TrinityCore :)

And it works, but you should make sure you have a master core online to handle the character screen and some syncs.

The protocol to exchange data between LOGONSERVER and WORLDSERVER is the same as the one between CLIENT and LOGON, except for the packet-size definition in the header: I used a fixed size of 4 bytes, unlike TC’s dynamic handling (2 bytes for normal packets, 3 bytes for big packets). I guess uint32 is enough space to send/receive compressed data.
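The fixed 4-byte length header could be framed roughly like this (a sketch under my own assumptions, not the poster’s actual code; a real implementation would also pin down the byte order rather than copying the host representation):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Sketch of a fixed-size frame for node-to-node traffic as described
// above: a uint32 payload length instead of the client protocol's
// variable 2/3-byte size field.
std::vector<uint8_t> FramePacket(const std::vector<uint8_t>& payload) {
    std::vector<uint8_t> out(4 + payload.size());
    uint32_t len = static_cast<uint32_t>(payload.size());
    std::memcpy(out.data(), &len, 4);                       // 4-byte size header
    std::memcpy(out.data() + 4, payload.data(), payload.size());
    return out;
}

// Read the payload length back out of a frame's header.
uint32_t FrameLength(const std::vector<uint8_t>& frame) {
    uint32_t len;
    std::memcpy(&len, frame.data(), 4);
    return len;
}
```

The receiver can then read exactly 4 bytes, learn the payload size, and read that many more bytes, which is simpler than branching on the 2-vs-3-byte client header.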

The LOGONSERVER is like a small proxy; the only difference is that this proxy can handle in-game commands, though not like the core with . or !: here we’ve got a restricted channel, and only players with GM level 1 or greater are able to use these commands.

There are some more changes, but these are the basics.

For about 5 years now I have been messing with emulators. Being in the Army, during most of my deployments there is no internet connection, or the connection is too slow for WoW. I have gone through many different types trying to find the best one with everything working. There is always something wrong with them: either character spells or NPC spells don’t work, or dungeons and raids are incomplete. I know nothing in this business is perfect. One thing all the emus have in common is the core, and the problem with that is that one little change here can affect something there.

The idea I have is to think outside the box and make a more modular core, i.e. raids and dungeons are handled separately in different files altogether. The biggest improvement I see with this is updating: instead of updating the entire core, you can make changes to sub-cores without affecting the whole operation. Also, as a bonus (maybe in time), you could run the sub-cores on different computers altogether, which would reduce server workload.

Main Core          Sub-Core Modules

World              Arena Core
                   Dungeons Core
                   Raids Core
                   PvP Battleground Core

Now, I’m not a master at writing code; hell, I can’t do it at all. This was nothing more than an idea. Please tell me what you think.

It would be nice if the world could be divided into 4 separate cores (Kal, EK, NR & Out), each running on a separate server and connecting to the database on a 5th server. Running multiple cores against the db seems straightforward, but how to tell the client to hand off to another IP when changing servers seems like the biggest issue.

This is already how it works… except that instead of being in separate cores (which is impossible: how would you technically separate dungeons, raids, arenas, and BGs? They are basically the same thing: maps with gameobjects, NPCs, and players on them), they are in separate classes. Encapsulation and information hiding are two ideas behind object-oriented programming.

It’s not as hard as you think. In fact it would just take time to do so.

However a question arises. What for? Do you plan on hosting a server for tens of thousands of players?

If so why not distribute the workload even better and break up the services to different servers?

Like an AI server for AI, a chat server for chat, etc.

I don’t think that’s what he meant.

Don’t think you are the first to come up with this idea; it has been around for many, many years, and some have tried to do it (it’s called clustering, by the way), but none have gotten it working well enough to release it…

I’m not, but that’s exactly why you would do it. Scalability is a great academic exercise.