Heads Up Poker Game

After a three-month battle that, for some, was never in doubt, the 'Heads Up Challenge' between Poker Hall of Famer Daniel Negreanu and online legend Doug Polk has come to an end. More than 25,000 hands of poker were played and, in the end, both players were complimentary of each other. Compliments aren't the scoreboard, however, and the scoreboard read that Polk had won the Challenge, finishing with a $1.2 million victory in the event.

Wednesday Action Closes Out the Fight

Heads-up poker is a one-on-one game: this stage begins when only two players remain at the table. The ability to play heads-up well is one of a player's greatest professional qualities, because the ability to 'read' the opponent becomes especially significant at this stage.

When the duo came to the virtual felt on WSOP.com on Wednesday afternoon, they knew that this was probably the last day of the battle. With two tables of $200/$400 action going and a little more than 1000 hands left on the clock, the end was nigh with Polk holding a safe edge of over $900,000. The question wasn't whether Polk was going to win the challenge, it was whether he would break the million mark in doing so.

The opening salvos went in Negreanu's favor. In a key hand, Polk would river a straight, but Negreanu caught a flush with the same river card to scoop up a decent pot of nearly $40,000. The good fortune would continue for Negreanu as he won by making hands (a full house for over $25,000) and by playing some power poker (a jam on a three-spade, double-paired board). Over the first few hours of action, Negreanu was able to chop around $150,000 off the Polk edge.

As was typical of the action throughout the Challenge, Polk was able to respond quickly. He was the beneficiary of a four-flush, holding the Ace when Negreanu held the King, and Polk got a key double-up in a pot of $180,000. That, along with some other action through the day, would see Polk end up with a $255,722 edge over the final 1,718 hands and close out the Challenge.
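
For readers who prefer win rates to dollar figures, a rough back-of-the-envelope conversion puts the result in the same big-blinds-per-100-hands terms used for online results. The figures below are approximations taken from the totals above (about $1.2 million over roughly 25,000 hands at $200/$400):

```python
# Rough conversion of the Challenge result into big blinds per 100 hands.
# All inputs are approximate: ~$1.2M over ~25,000 hands at $200/$400 stakes.
final_margin_usd = 1_200_000
hands_played = 25_000
big_blind_usd = 400

bb_per_100 = final_margin_usd / big_blind_usd / hands_played * 100
print(f"~{bb_per_100:.0f} bb/100")  # roughly 12 bb/100
```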

Newfound Respect Between the Players

After an acrimonious start between the twosome (and some tense moments through the Challenge), the close came almost as a relief to the players. Over his Twitter account, Polk recounted the final score and, in an understatement, simply said 'We won, guys. We did it.' Negreanu was also complimentary of the play of Polk, saying over the GGPoker stream, 'He played well, no question about that.'

The two players commiserated over Twitter prior to the close of the Challenge and, it seems, they are willing to discuss the overall event in more depth.

That discussion would be VERY worthwhile.

In looking back at the Challenge, Negreanu obviously made a few mistakes. First off, he probably should have gotten more of the action to be played in a live setting, which is much more his game. Even after a year of retirement, Polk's game took little time to get into shape and to have nearly all the play at WSOP.com (the first 200 hands were played live on PokerGO) was definitely an advantage for Polk.

Taking on a Heads-Up master like Polk was also an error in judgment for Negreanu. Heads Up poker has never been the Hall of Famer's forte, whereas it was the very subject that Polk mastered when he was one of the dominant players in the online game. But it is part of poker that someone believes that they've got a chance in any game, so you must give kudos to Negreanu for taking the shot.

Where do the two men go from here? Polk may just drift back to retirement after backing the truck up and Negreanu will probably be playing poker when he's 80 (and, like Doyle Brunson, still be playing at a high level). But the two men have given the poker world an entertaining three months of action and, for that, we've got to show our appreciation.

Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker

Links

Twitch | YouTube | Twitter
Downloads & Videos | Media Contact

DeepStack bridges the gap between AI techniques for games of perfect information, like checkers, chess and Go, and those for imperfect information games, like poker. It reasons while it plays, using 'intuition' honed through deep learning to reassess its strategy with each decision.

With a study completed in December 2016 and published in Science in March 2017, DeepStack became the first AI capable of beating professional poker players at heads-up no-limit Texas hold'em poker.

DeepStack computes a strategy based on the current state of the game for only the remainder of the hand, not maintaining one for the full game, which leads to lower overall exploitability.

DeepStack avoids reasoning about the full remaining game by substituting computation beyond a certain depth with a fast approximate estimate. Automatically trained with deep learning, DeepStack's 'intuition' gives a gut feeling of the value of holding any cards in any situation.

DeepStack considers a reduced number of actions, allowing it to play at conventional human speeds. The system re-solves games in under five seconds using a simple gaming laptop with an Nvidia GPU.
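
The practical effect of the last two points is a drastically smaller problem at each decision: with only a handful of actions per decision point and the lookahead cut off at a fixed depth, the tree handed to the solver stays tiny, and the learned estimate covers everything beyond the frontier. The sketch below only illustrates that size argument with made-up numbers; it is not DeepStack's actual tree accounting.

```python
# Rough size comparison: a sparse, depth-limited lookahead vs. unrestricted
# no-limit play. All numbers are illustrative placeholders; the real game
# allows a huge range of bet sizes at every decision point.
actions_sparse, actions_full = 4, 200   # actions considered per decision point
depth_sparse, depth_full = 4, 10        # decisions looked ahead before stopping

lookahead_nodes = actions_sparse ** depth_sparse   # frontier is then scored by
full_game_nodes = actions_full ** depth_full       # the learned value estimate
print(f"~{lookahead_nodes:,} lookahead nodes vs ~{full_game_nodes:.1e} unrestricted")
```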

The first computer program to outplay human professionals at heads-up no-limit Hold'em poker

In a study completed in December 2016 and involving 44,000 hands of poker, DeepStack defeated 11 professional poker players, all but one by a statistically significant margin. Over all games played, DeepStack won 49 big blinds per 100 hands (always folding would only lose 75 bb/100), over four standard deviations from zero, making it the first computer program to beat professional poker players in heads-up no-limit Texas hold'em poker.
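
As a quick sanity check on the "over four standard deviations" claim, the quoted win rate by itself pins down how small the standard error of the estimate must be; the per-hand variance is not stated on this page, so this is only the implied bound:

```python
# Back out what "over four standard deviations from zero" implies about the
# standard error of the reported 49 bb/100 win rate.
win_rate_bb100 = 49.0
z_threshold = 4.0

max_standard_error = win_rate_bb100 / z_threshold
print(f"standard error must be below ~{max_standard_error:.1f} bb/100")
```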

Games are serious business

Don't let the name fool you: 'games' of imperfect information provide a general mathematical model that describes how decision-makers interact. AI research has a long history of using parlour games to study these models, but attention has focused primarily on perfect information games, like checkers, chess or Go. Poker is the quintessential game of imperfect information: each player holds information the other cannot see (their private cards).

Until now, competitive AI approaches in imperfect information games have typically reasoned about the entire game, producing a complete strategy prior to play. However, to make this approach feasible in heads-up no-limit Texas hold'em—a game with vastly more unique situations than there are atoms in the universe—a simplified abstraction of the game is often needed.
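
To make "abstraction" concrete: abstraction-based solvers typically merge strategically similar hands into a small number of buckets before solving, so the game they actually solve is far smaller than the real one. The sketch below shows only the bucketing idea, with made-up strength scores standing in for real equity calculations:

```python
import numpy as np

# Toy card abstraction: group hands into a few buckets by a strength score.
# The scores are illustrative placeholders, not real poker equities.
hand_strength = {
    "AA": 0.85, "KK": 0.82, "AKs": 0.67, "QJs": 0.60,
    "T9s": 0.54, "72o": 0.35, "J4o": 0.41, "55": 0.61,
}

n_buckets = 3
scores = np.array(list(hand_strength.values()))
# Quantile cut points so each bucket holds roughly the same number of hands.
edges = np.quantile(scores, np.linspace(0, 1, n_buckets + 1)[1:-1])
buckets = {hand: int(np.digitize(s, edges)) for hand, s in hand_strength.items()}
print(buckets)  # every hand in a bucket is treated identically by the solver
```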

A fundamentally different approach

DeepStack is the first theoretically sound application of heuristic search methods—which have been famously successful in games like checkers, chess, and Go—to imperfect information games.

At the heart of DeepStack is continual re-solving, a sound local strategy computation that only considers situations as they arise during play. This lets DeepStack avoid computing a complete strategy in advance, skirting the need for explicit abstraction.
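
In the published description of continual re-solving, the only state carried from one decision to the next is the player's own range over private hands and an estimate of the opponent's counterfactual values; no full-game strategy is ever stored. The sketch below shows that bookkeeping only, with the depth-limited equilibrium solve stubbed out; the array sizes and uniform initialisation are simplifying assumptions:

```python
import numpy as np

N_HANDS = 1326  # number of two-card private hands in hold'em

def resolve(public_state, own_range, opp_cfvs):
    # Placeholder for the depth-limited solve: a real implementation runs CFR
    # over a sparse lookahead tree and returns a strategy for every private
    # hand plus updated opponent counterfactual values.
    n_actions = 3
    strategy = np.full((N_HANDS, n_actions), 1.0 / n_actions)
    return strategy, opp_cfvs

own_range = np.full(N_HANDS, 1.0 / N_HANDS)   # uniform range before any action
opp_cfvs = np.zeros(N_HANDS)                  # initial opponent value estimates

strategy, opp_cfvs = resolve({"street": "preflop"}, own_range, opp_cfvs)

# After acting, update your own range by Bayes' rule: weight each hand by the
# probability of taking the chosen action with that hand, then renormalise.
chosen_action = 1
own_range *= strategy[:, chosen_action]
own_range /= own_range.sum()
```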

During re-solving, DeepStack doesn't need to reason about the entire remainder of the game because it substitutes computation beyond a certain depth with a fast approximate estimate, DeepStack's 'intuition' – a gut feeling of the value of holding any possible private cards in any possible poker situation.
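
One way to picture that 'intuition' is as a learned function from a description of the situation (pot size, public cards, and both players' ranges over private hands) to counterfactual values for every possible holding of each player. The network below is only an illustrative stand-in; the input encoding, sizes and architecture are assumptions, not the published design:

```python
import torch
import torch.nn as nn

N_HANDS = 1326  # two-card private hands in hold'em

class ValueNet(nn.Module):
    """Illustrative value network: situation in, per-hand counterfactual values out."""
    def __init__(self, board_dim=52, hidden=512):
        super().__init__()
        in_dim = 1 + board_dim + 2 * N_HANDS
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * N_HANDS),
        )

    def forward(self, pot, board, range_p1, range_p2):
        x = torch.cat([pot, board, range_p1, range_p2], dim=-1)
        values = self.net(x)
        return values[..., :N_HANDS], values[..., N_HANDS:]

net = ValueNet()
pot = torch.tensor([[0.1]])                   # pot size (e.g. stack-normalised)
board = torch.zeros(1, 52)                    # one-hot public cards
r1 = torch.full((1, N_HANDS), 1.0 / N_HANDS)  # both ranges uniform here
r2 = torch.full((1, N_HANDS), 1.0 / N_HANDS)
cfv_p1, cfv_p2 = net(pot, board, r1, r2)      # the 'gut feeling' for each hand
```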

Finally, DeepStack's intuition, much like human intuition, needs to be trained. We train it with deep learning using examples generated from random poker situations.
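
A minimal sketch of that training recipe, under strong simplifying assumptions: random situations are drawn (random pot, board and ranges), each one is labelled by a slow, exact solve (stubbed out below as noise), and a network is fit to those labels by ordinary regression. None of the specific choices here (Dirichlet range sampling, the layer sizes, Adam, mean-squared error) are claimed to match the published training setup:

```python
import torch
import torch.nn as nn

N_HANDS = 1326

def random_situation():
    # Draw a random training situation: pot size, five public cards and two
    # ranges sampled from a Dirichlet so they are valid distributions.
    pot = torch.rand(1)
    board = torch.zeros(52).scatter_(0, torch.randperm(52)[:5], 1.0)
    r1 = torch.distributions.Dirichlet(torch.ones(N_HANDS)).sample()
    r2 = torch.distributions.Dirichlet(torch.ones(N_HANDS)).sample()
    return torch.cat([pot, board, r1, r2])

def solve_for_values(situation):
    # Stand-in for the expensive solve that labels each sampled situation with
    # target counterfactual values; here just noise of the right shape.
    return torch.randn(2 * N_HANDS)

model = nn.Sequential(
    nn.Linear(1 + 52 + 2 * N_HANDS, 512), nn.ReLU(),
    nn.Linear(512, 2 * N_HANDS),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):  # a real run would use millions of solved situations
    x = torch.stack([random_situation() for _ in range(32)])
    y = torch.stack([solve_for_values(s) for s in x])
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```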

DeepStack is theoretically sound, produces strategies substantially more difficult to exploit than abstraction-based techniques and defeats professional poker players at heads-up no-limit poker with statistical significance.

Download

Paper & Supplements

Hand Histories

Members (Front-back)

Michael Bowling, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, Viliam Lisý, Martin Schmid, Matej Moravčík, Neil Burch

Heads Up Poker Game Free

Low-variance Evaluation

The performance of DeepStack and its opponents was evaluated using AIVAT, a provably unbiased low-variance technique based on carefully constructed control variates. Thanks to this technique, which gives an unbiased performance estimate with 85% reduction in standard deviation, we can show statistical significance in matches with as few as 3,000 games.
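
AIVAT itself is considerably more elaborate, but the underlying control-variate idea is simple: subtract from each noisy observation a correlated quantity whose expectation is known to be zero, leaving the mean unchanged while the variance drops. The sketch below demonstrates only that generic idea on synthetic numbers; it is not an implementation of AIVAT:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000  # matches the number of games needed for significance quoted above

# 'luck' is a zero-mean component we can evaluate (think card luck scored by a
# value function); 'skill' is the small signal we actually want to measure.
luck = rng.normal(0.0, 10.0, size=n)
skill = 0.5
observed = skill + luck + rng.normal(0.0, 1.0, size=n)  # raw per-game winnings

corrected = observed - luck  # subtract the known-zero-mean control variate
for name, x in [("plain", observed), ("corrected", corrected)]:
    print(f"{name:9s} estimate {x.mean():.2f}, std error {x.std(ddof=1)/np.sqrt(n):.3f}")
```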

Abstraction-based Approaches

Despite using ideas from abstraction, DeepStack is fundamentally different from abstraction-based approaches, which compute and store a strategy prior to play. While DeepStack restricts the number of actions in its lookahead trees, it has no need for explicit abstraction as each re-solve starts from the actual public state, meaning DeepStack always perfectly understands the current situation.
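
One way to see the difference: the candidate actions in each lookahead are derived from the real pot and stacks at the current public state, rather than from a pre-discretised abstract game built before play. The bet fractions below are illustrative choices, not DeepStack's actual action menu:

```python
# Build a sparse action set directly from the actual public state.
def sparse_actions(pot, to_call, stack, bet_fractions=(0.5, 1.0, 2.0)):
    actions = []
    if to_call > 0:
        actions.append(("fold", 0))
    actions.append(("call", min(to_call, stack)))
    for fraction in bet_fractions:
        bet = round(fraction * pot)
        if to_call < bet < stack:          # skip sizes that collapse into a
            actions.append(("bet", bet))   # plain call or an all-in
    if stack > to_call:
        actions.append(("all_in", stack))
    return actions

print(sparse_actions(pot=600, to_call=200, stack=5000))
```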

Professional Matches

We evaluated DeepStack by playing it against a pool of professional poker players recruited by the International Federation of Poker. In total, 44,852 games were played by 33 players from 17 countries. Eleven players completed the requested 3,000 games, and DeepStack beat all but one of them by a statistically significant margin. Over all games played, DeepStack outperformed the players by a margin more than four standard deviations from zero.


Heuristic Search

At a conceptual level, DeepStack's continual re-solving, 'intuitive' local search and sparse lookahead trees describe heuristic search, which is responsible for many AI successes in perfect information games. Until DeepStack, no theoretically sound application of heuristic search was known in imperfect information games.
