I have now rewritten my board constructor (and move applier) to cut out a lot of branching and be generally more efficient. Here is the new gprof data:
The constructor is now much more efficient (a little more than 2.5 times as fast).

Code:
Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds    seconds    calls   s/call   s/call  name
49.43      1.29     1.29   7990387     0.00     0.00  Board::canCapture(unsigned long long)
20.31      1.82     0.53    281725     0.00     0.00  Board::generateLazyMoves()
10.73      2.10     0.28        80     0.00     0.03  MiniMax(Board&, int, int, int, int&)
 7.66      2.30     0.20   7990409     0.00     0.00  Board::Board(Board&, Move&)
 5.36      2.44     0.14   1752236     0.00     0.00  std::vector<Move, std::allocator<Move> >::_M_insert_aux(__gnu_cxx::__normal_iterator<Move*, std::vector<Move, std::allocator<Move> > >, Move const&)
 2.68      2.51     0.07   1700863     0.00     0.00  std::vector<Board, std::allocator<Board> >::_M_insert_aux(__gnu_cxx::__normal_iterator<Board*, std::vector<Board, std::allocator<Board> > >, Board const&)
 2.30      2.57     0.06    281606     0.00     0.00  Board::validateMoves_rtnBoard(std::vector<Move, std::allocator<Move> >*)
 0.77      2.59     0.02     18210     0.00     0.00  Board::underAtk(unsigned long long)
 0.77      2.61     0.02                              Move::DEBUG_printMove()
 0.00      2.61     0.00       131     0.00     0.00  beginsWith(char*, std::string&)
 0.00      2.61     0.00       119     0.00     0.00  Board::generateMoves()
 0.00      2.61     0.00       119     0.00     0.00  Board::validateMoves(std::vector<Move, std::allocator<Move> >*)
 0.00      2.61     0.00       117     0.00     0.00  Board::getGameState()
 0.00      2.61     0.00         8     0.00     0.00  std::vector<int, std::allocator<int> >::_M_insert_aux(__gnu_cxx::__normal_iterator<int*, std::vector<int, std::allocator<int> > >, int const&)
 0.00      2.61     0.00         2     0.00     0.00  checkClaimDraw(std::vector<Board, std::allocator<Board> >&, Move&, bool)
 0.00      2.61     0.00         1     0.00     0.00  global constructors keyed to kingAtk
 0.00      2.61     0.00         1     0.00     0.00  algeToMove(std::string, Board*, signed char)
 0.00      2.61     0.00         1     0.00     0.00  moveToAlge(Move)
 0.00      2.61     0.00         1     0.00     0.00  computeConst()
 0.00      2.61     0.00         1     0.00     2.59  findBestMove(Board, int&, unsigned long, int, bool)
 0.00      2.61     0.00         1     0.00     0.00  __static_initialization_and_destruction_0(int, int)
 0.00      2.61     0.00         1     0.00     0.00  char* std::string::_S_construct<char*>(char*, char*, std::allocator<char> const&, std::forward_iterator_tag)
The number of nodes evaluated per second is now ~43,000, which roughly translates to a 4-ply search in 15 seconds, both with and without the quiescence extension at the start.
Does that number seem normal for basic minimax/alpha-beta with random move ordering? I have no prior experience in chess programming, so I don't know what is normal. I don't want to move on to other features like the transposition table and the null-move heuristic, only to realize later that something is fundamentally wrong in my program.
Thank you very much for all your help
I wrote a chess program in C++ with alpha-beta that easily reaches depth 7
in about 2 seconds, even in completely complex positions.
Now I am trying to reach depth 8 with alpha-beta,
but it takes a very long time.
My system: OS = Windows XP, CPU = E2180 (2 GHz), RAM = 2 GB.
It takes 105 seconds to reach depth 8 in complex chess positions,
but in usual positions it takes 30 seconds to reach depth 8.
I just want to say that the program you wrote is absolutely not optimized.
I am sure that your program does not go more than 6 plies deep
with alpha-beta.
I am sure it does.
Although it's been a couple of years since you wrapped up your chess program, can you recall what the main issue behind your shallow search depth was?
I'm writing a program for a different 2-player combinatorial game and getting depth 5 with minimax, alpha-beta pruning, random move order, in about 8 secs. This game has roughly 40-80 legal moves per ply, so something around 200,000 positions per sec are evaluated with a fairly complex evaluator (the evaluator is 60% of running time on a profiler).
This is in interpreted Java as well.
I'm guessing move ordering will be the next most-important optimization.
With that high a branching factor (is that real branching factor or effective branching factor?), 5 plies in 8 seconds sounds pretty good.
Move ordering is VERY important. With bad move ordering, alpha-beta won't perform much better than minimax. With perfect ordering, it will search twice as deep (square roots the branching factor). But of course, if you know the perfect ordering already, why would you need the search?
So in reality it would be something in between. You'll probably get something like 1.5x the plies you get with just minimax, if you have pretty good move ordering.
After that, at least for chess, it's mostly heuristics ("guessing" that a move will not be relevant and reduce its depth or kill it completely). There are many heuristics that can be applied, but I'm not sure if they apply to your game. E.g. in chess, if you let the opponent move twice in a row (a "null move") and a shallow search still gives a beta cut-off, we assume a real move would too, and prune. Or, if you have good move ordering, you can search moves closer to the front of the list (more likely good moves) to greater depth than the rest of the moves.
Basically boils down to making the tree uneven. Search less likely (to be important) nodes to shallower depth, and use the time saved to search the more promising nodes to greater depth.
Then the ply count is not so important anymore.
If you prune too much, you can get very high ply counts, but that doesn't mean anything, and can be very dangerous.
On the other hand, if you extend too much, you'll get low ply counts, but it can potentially play better.
My 5 plies in 8 secs is an average. With my current (poor) move ordering, I can search up to 6 million nodes to get 5 ply, and that can take as long as 30 secs.
Quote: Move ordering is VERY important. With bad move ordering, alpha-beta won't perform much better than minimax. With perfect ordering, it will search twice as deep (square roots the branching factor). But of course, if you know the perfect ordering already, why would you need the search?

I just read Francois-Dominic Laramee's excellent tutorial on chess programming over on GameDev.net, and it's motivated me to add all the basic bells and whistles used by chess programs: history table, transposition table, iterative deepening, etc. I wish I had a way to make evaluation faster, however. It's killing me.
Basically to even check for a win you have to find all paths in a sparse graph. This problem isn't even in NP. But there may be tricks I haven't thought of yet...
Yeah, the nice thing about chess is that it's well understood. I have good intuition (I've been rated as high as 2450 blitz, 2280 USCF... 20 years ago). Opening books are all over.

Quote: After that, at least for chess, it's mostly heuristics ("guessing" that a move will not be relevant and reduce its depth or kill it completely).
With this new game, I don't even know what a "good" position is. It's all guess-work. I may have to use some machine-learning techniques (many Othello programs do this to develop their opening books).
Quote: Then the ply count is not so important anymore.

Since my game often ends in less than 20-24 ply, if I could get to (say) 8-10 ply I'd imagine the game would be almost unbeatable.
Anyway, thanks again for the quick and informative reply. I've just stumbled upon a wealth of resource online, so I'm going to go feed this obsession for a few days.
Btw, I'm a professor of computer science and I'll be assigning this as homework next semester. I won't expect 2nd yr C.S. students to do much beyond simple minimax with alpha-beta however. Even that is going to be very hard for them I fear.
Quote: Thanks for the reply cyberfish! I'm not sure what the difference is between real branching factor and effective. There are about 40-80 legal moves in a given early-game position. This goes down a little until move 10, after which there are 200-400 legal moves. Often games end before then, however.

"Effective branching factor" is just a misnomer some people use for the d-th root of the number of nodes searched to reach depth d (the b satisfying b^d = nodes_searched). For good chess programs it's usually 2-3, after all kinds of reductions, meaning they search very deep and narrow.
Bad programs can't afford to do that because they have bad heuristics that will make them miss important things.
If the branching factor is that high, maybe minimax is not the best way? Another game with high branching factor is Go, and apparently minimax is pretty useless there, because it would only be able to search 2-3 plies, and will be beaten by even beginners.
Quote: Btw, I'm a professor of computer science and I'll be assigning this as homework next semester. I won't expect 2nd yr C.S. students to do much beyond simple minimax with alpha-beta however. Even that is going to be very hard for them I fear.

Well, I wrote most of my chess program when I was in high school, without any formal training in computer science (or even programming), so I would think it's possible if the students are motivated enough. It's not incredibly strong (~2300 blitz), but that's probably due to my almost non-existent chess knowledge (I am about 1500).