Hey,
I am about to make a program that does about 100,000 read/write actions a second to a database.
My question:
Can a database handle this amount of connections in such a short time?
Specs:
2.4 GHz quad core
4 GB DDR2 RAM
Greetings,
Niels
This isn't really a C question, and I'm certainly not a database expert, but I can tell you that your answer depends on a lot of things, including:
OS and whatever other services may be running besides your database
What database you're using
Query complexity
Is this a "burst" or a "sustained" rate? I can't imagine you're asking such a question here if you are really doing 100,000 queries/second all day, every day. Bursts of queries will be processed eventually, hopefully before the next burst arrives.
Can you use a single/persistent connection? Make all queries on that one connection since creating/destroying connections is reasonably expensive.
Will your reads and writes be interspersed?
Will your writes be updates of existing records or additions of new records?
What kind of locks will the writes require?
How many columns and tables are involved?
What key(s) do you have defined on the table(s)?
Are there any complex joins?
Can you parallelize any queries, or do they need to happen in a specific order?
...
Just try it out and see how the system performs.
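Trying it out can be as cheap as a throwaway micro-benchmark. A sketch below, using Python's built-in SQLite driver purely as a stand-in for whatever database you actually pick (the absolute numbers will differ wildly between engines and disks, but it demonstrates two of the points above: one persistent connection, and batching a burst into a single transaction). The table name and string contents are made up:

```python
import sqlite3
import time

# One persistent connection for the whole run -- opening and closing
# a connection per query is exactly the overhead to avoid.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE strings (s TEXT PRIMARY KEY, count INTEGER)")

N = 100_000
start = time.perf_counter()
with conn:  # one transaction wrapped around the whole burst
    conn.executemany(
        "INSERT OR IGNORE INTO strings VALUES (?, 0)",
        ((f"string-{i}",) for i in range(N)),
    )
elapsed = time.perf_counter() - start
print(f"{N} inserts in {elapsed:.2f}s ({N / elapsed:,.0f} inserts/s)")
```

Run the same loop without the enclosing transaction (one commit per insert) and the throughput typically drops by orders of magnitude, which is why the batching question matters more than the raw hardware specs.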
It is an algorithm that reacts to the user's behavior.
Every time the user does something, it analyzes that; this will create about 10,000+ strings/functions.
All those 10,000+ strings must be added to the database if they don't exist yet.
It also counts how often each such string occurs.
It then analyzes something else that also gives about 10,000+ values.
All those values need to be added to the database, and their unique ids must be added to all the strings made in the first step.
The size of the database would probably be a few gigs.
No need to connect to other databases, just one connection.
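For the "add if it doesn't exist yet, and count how often it occurs" step, most databases let you express that as an insert-or-ignore plus an update rather than a read-then-decide round trip per string, and the whole 10,000+ batch can go in one transaction. A sketch of that pattern using SQLite syntax (the table and column names are made up, not from any particular schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE strings (
    id    INTEGER PRIMARY KEY,     -- the unique id attached to later values
    value TEXT UNIQUE NOT NULL,
    count INTEGER NOT NULL DEFAULT 0)""")

def record(conn, values):
    """Insert each string if new, bump its counter, and return value -> id."""
    ids = {}
    with conn:  # one transaction for the whole batch
        for v in values:
            # Inserts only when the string is not there yet...
            conn.execute("INSERT OR IGNORE INTO strings (value) VALUES (?)", (v,))
            # ...and counts every occurrence either way.
            conn.execute("UPDATE strings SET count = count + 1 WHERE value = ?", (v,))
            ids[v] = conn.execute(
                "SELECT id FROM strings WHERE value = ?", (v,)).fetchone()[0]
    return ids

ids = record(conn, ["foo", "bar", "foo"])
```

The ids returned can then be linked to the second batch of values without re-querying for each one.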
//edit
About just trying it out... I first want to know if it is possible before writing that huge algorithm xD
The algorithm I described is not everything; those are just the database read/write actions. It also needs to render all the strings/values to put in the db, and that is quite some code, I can tell you :P
Last edited by nielskool; 01-03-2011 at 05:29 PM.
Again, this is not really the best forum for such a question...
Your question is far too vague to answer. I would need fairly intimate knowledge of your system, beyond just proc and RAM, including whatever algorithms you're using (their time complexity), how the user is interacting, whether it's over a network, etc. Even then, I could only wager a very inaccurate guess at best.
This sounds more like a case of pre-optimization (which is the root of all evil), or just worrying about maybe having to pre-optimize. Maybe the database won't even be the problem. Maybe you will spend the bulk of your time waiting for user input (which is common). Perhaps the complicated string rendering/generation algorithms you're using will be the bottleneck. Who knows?
Presumably you have to solve this problem (i.e. it's for work, school, etc), and thus you need to write the code. So write it flexibly so that you can change your underlying storage engine without having to change too much of your code.
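One minimal way to keep the storage engine swappable is to hide it behind a single small interface that the rest of the program talks to. A sketch (all class and method names here are made up for illustration):

```python
import sqlite3

class StringStore:
    """The only storage surface the rest of the program sees."""
    def add(self, value):
        raise NotImplementedError
    def count(self, value):
        raise NotImplementedError

class SqliteStringStore(StringStore):
    """One concrete backend; swapping databases means writing one new class."""
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS strings "
            "(value TEXT PRIMARY KEY, count INTEGER NOT NULL DEFAULT 0)")

    def add(self, value):
        with self.conn:
            self.conn.execute(
                "INSERT OR IGNORE INTO strings (value) VALUES (?)", (value,))
            self.conn.execute(
                "UPDATE strings SET count = count + 1 WHERE value = ?", (value,))

    def count(self, value):
        row = self.conn.execute(
            "SELECT count FROM strings WHERE value = ?", (value,)).fetchone()
        return row[0] if row else 0

store = SqliteStringStore()
store.add("hello")
store.add("hello")
```

If SQLite later turns out to be the bottleneck, only `SqliteStringStore` has to be replaced; the string-rendering code never changes.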
Now that that's out of the way, I would reckon that computer could hold up, at least if a few things were true:
1. It wasn't bogged down by too many other processes
2. You had well-designed databases and optimized queries
3. You had periodic breaks in the 100,000+ queries generated from some user input, i.e. the user stopped typing/clicking from time to time.
Maybe it could hold up under only two of those conditions, or one, or maybe even zero. There just isn't enough information to say for sure. Once again, try it out.