January 9, 2013
In this post we’ll take a look at behavioural economics and decision making. In particular, we’ll discuss what leads an individual to take part in a particular gamble, and how observed behaviour often fails to match what ‘rational’ decision-making criteria predict. These criteria state that a rational gambler will enter a game if (and only if) the price of entry is less than the game’s expected value, calculated as the sum of the expected pay-offs over all possible outcomes. A player should therefore enter a market on Smarkets only if the expected return is greater than the amount invested. As we’ll see, however, this rule is not always followed.
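As a quick illustration of that criterion, here's a minimal sketch in Python (the bet and its pay-offs are invented for the example):

```python
def expected_value(outcomes):
    """Sum of probability-weighted pay-offs over all possible outcomes."""
    return sum(prob * payoff for prob, payoff in outcomes)

# A hypothetical bet: 50% chance of winning £10, 50% chance of winning nothing.
ev = expected_value([(0.5, 10), (0.5, 0)])  # £5
# A 'rational' gambler enters only if the entry price is below £5.
```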
The ‘St. Petersburg Paradox’, a game first proposed in 1713 by Nicolas Bernoulli, is one such example. According to Bernoulli, you must imagine you are standing in St. Petersburg Square when a man approaches you to play the following game:
I toss a coin until it comes up tails. When this happens the game ends, and you get paid depending on how long the run of heads lasted before that first tails. You get £1 if the coin comes up tails on the first toss, £2 if the first tails comes on the second toss, £4 on the third, £8 on the fourth, and so on. The amount that you stand to win doubles with each successive heads, and the game ends when the coin lands on tails.
The question, then, is: how much would you be willing to pay to play such a game?
We can calculate the expected value of the game as the sum of the expected pay-offs for all of the possible outcomes. This is where the paradox occurs: the pay-off is infinite. You have a 1/2 chance of winning £1, plus a 1/4 chance of winning £2, plus a 1/8 chance of winning £4, and so on; every possible outcome contributes £0.50 to the expected value, so the sum grows without limit. The potential winnings escalate just as fast: if the game lasts for 30 throws a player stands to win £537 million, and £563 trillion on the 50th throw.
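This is easy to verify with a short sketch, assuming the prize starts at £1 and doubles with each extra round, and that the game ends on round n with probability (1/2)^n (the 1,000-round cut-off below is arbitrary):

```python
def partial_expected_value(rounds):
    """EV of the St. Petersburg game truncated after `rounds` rounds.

    The game ends on round n with probability (1/2)**n and pays 2**(n-1),
    so every round contributes exactly £0.50 to the expected value.
    """
    return sum(2 ** (n - 1) * 0.5 ** n for n in range(1, rounds + 1))

partial_expected_value(30)  # 15.0 -- and it keeps growing by £0.50 per round
```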
If we accept this criterion then we must also accept that humans are being irrational in not paying large sums to play this game. Who wouldn’t want to play, given the huge payout potential? Yet studies have shown that most players are not willing to pay more than £6 to enter. This doesn’t align with our initial statement that a rational gambler will enter any game whose price of entry is below its expected value.
One suggested reason for this difference lies in the way individuals think about profit and loss. For a player paying £6 to enter, the most likely outcome is that a tails occurs within the first two throws and the player loses money. It is hardly surprising that people are unwilling to pay much to play a game they will most probably lose money on. It has also been suggested that people discount the gains from extremely unlikely events, so the enormous potential winnings may simply not enter into the decision of how much to pay.
Another suggested explanation is that no one can actually pay out the largest sums available in this game, and so the maximum payout must be capped. This can have a huge effect on the expected value of the game.
Let’s say a player expects the stranger to be able to pay out a maximum of £100. The expected value of the game then becomes just £4.28. Now suppose the stranger is a very wealthy businessman able to pay out up to £1,000,000. The expected value rises, but not by much: it’s still only £10.95. Finally, if the game were capped at £8.5 trillion (the GDP of the United States in 2007), the expected value would be just £14.13.
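The effect of a cap can be sketched as a small extension of the same calculation, clipping each pay-off at what the banker can actually afford (the 1,000-round truncation is arbitrary but far beyond where the terms matter; this reproduces the £100 and £1,000,000 figures above):

```python
def capped_expected_value(cap, rounds=1000):
    """EV of the St. Petersburg game when the banker can pay at most `cap`.

    The game ends on round n with probability (1/2)**n; the pay-off
    2**(n-1) is clipped at the banker's maximum payout.
    """
    return sum(min(2 ** (n - 1), cap) * 0.5 ** n for n in range(1, rounds + 1))

capped_expected_value(100)        # ~£4.28
capped_expected_value(1_000_000)  # ~£10.95
```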
When the actual expected values are taken into account, it’s not difficult to see why players are reluctant to put down large sums of money. They are, in fact, not being that irrational after all.
December 10, 2012
Measuring risk is inherent in every decision we make. Usually we don’t even notice that we’re making a risk assessment when, say, stepping into a lift or crossing the road, yet we do so constantly. Despite the near-unconscious level at which we process these decisions, for each of them our brains perform a complex analysis of the variables we’re aware of in a given situation and settle on a particular course. We’re astoundingly good at this, keeping track of huge numbers of minor details every day of our lives without really thinking. However, we are by nature fallible and do often get things wrong.
Given how important the accurate estimation of risk is to the financial and betting industries, at both an institutional and an individual level, it’s important to understand how it is accomplished. It’s also useful to consider the potential pitfalls of inaccurate assessment, as well as how such errors can be avoided.
How Do We Measure Risk?
As previously mentioned, most of us make nearly unconscious risk-assessments all the time in our heads. This usually works well for everyday situations, but we all have our own biases, fallacies, and irrational ways of reacting to events to which we can fall victim in decision-making. So when it comes to important decisions such as in what way to trade millions of dollars of shares or which medical treatment should get government funding, we need help maintaining objectivity and rationality. Using statistical models to analyse which is the best outcome as well as how likely negative outcomes are allows us to disregard our own biases and assess decisions objectively, thereby allowing us to make better decisions. Essentially, these models are prosthetics which augment our ability to make the right call in situations too complex for our normal faculties to handle effectively.
We construct these models by gathering data, whether through scientific enquiry or through passive methods such as publicly accessible records of stocks traded. When we have enough data, we can analyse it and draw conclusions to apply as needed. There are a huge number of statistical tools that allow us to do this, and while I’m no statistician, at base level many are easy to understand. Take the Law of Large Numbers, for instance, which states that the average of a large enough sample from the same distribution will tend to converge on the “expected value” (the weighted average of all possible values of the variable being measured). In essence, the more times we repeat the same test, the closer we come to knowing the truth. For instance, the expected value of rolling a standard 6-sided die is 3.5; the more times we roll the die, the more closely the average of the results converges on that number.
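The die example is easy to demonstrate with a quick simulation (the sample sizes and seed are arbitrary choices for the illustration):

```python
import random

def average_roll(n_rolls, seed=42):
    """Average result of n_rolls of a fair six-sided die."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(n_rolls)) / n_rolls

# The more rolls, the closer the running average gets to the expected value 3.5.
for n in (10, 1000, 100000):
    print(n, average_roll(n))
```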
There are numerous other powerful tools we can bring to bear on problems of assessing risk, such as Bayesian probability, and they are all used to assemble models for just that purpose. These models, in turn, allow the creation of complex algorithms and computer programs used to trade on financial markets worldwide. In fact, the vast majority of trading on US stock exchanges is done by programs known as “algo-bots.” This brings huge benefits, but it also introduces serious potential flaws into the system on which the entire global economy now rests: without human oversight, programs can cause huge problems when they encounter situations they’re unable to handle.
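Bayesian probability, mentioned above, boils down to updating a prior belief in light of new evidence; a minimal sketch with made-up numbers:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' theorem."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Hypothetical: a crash-warning signal fires 90% of the time before a crash,
# 10% of the time otherwise, and crashes have a 1% base rate.
posterior = bayes_update(0.01, 0.9, 0.1)  # ~0.083: still unlikely, but 8x the prior
```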
I’ve written before about the problems of relying exclusively on theoretical models to make decisions, but a good example of what happens when things go wrong is the “Flash Crash” of 2010. At 2:32pm on the 6th of May that year, a mutual fund’s algo-bot attempted to sell $4.1bn worth of a particular futures contract, a huge sale by any standard, which combined with that day’s heightened volatility to trigger a selling spree by other automated programs in the same market. As the selling worsened over the next 10 minutes, the fever spread to the stock markets and sent the Dow Jones Industrial Average tumbling 600 points in a matter of minutes. The fall was only stopped when an automated monitoring program kicked in and halted trading for five seconds, relieving the pressure of so many programs selling at once and allowing the markets to begin recovering (which itself took only minutes). Five seconds may not seem like much time to you or me, but in the time-frame of High Frequency Trading and algo-bots it’s more than enough for such a precipitous drop to start reversing itself. By 3:00pm most of the 600 points lost had been regained, marking, at the time, the largest intraday swing in the Dow’s history.
This short, sharp, entirely automated incident perfectly illustrates the problem with inaccurately modeling risk. After Black Monday in 1987, when a very similar problem occurred with automated trading programs, the primitive programs of the time (which accounted for only around 10% of trading) were considered too risky to run without human oversight. Unfortunately, our current system is built around such algorithmically-derived programs, so one of the few things we can do is build safeguards like the system that halted trading during the Crash. The incident also highlights the inadequate modeling in the basic design of the algorithms governing the bots’ behaviour. It isn’t possible to anticipate every single situation using a model derived from past data, but it’s certainly possible to include situations such as the Flash Crash, which lie at the far end of the distribution of probabilities, in that model.
Modeling risk when betting on live events is arguably harder than doing so with pure data at something of a remove from the real world, such as on the financial markets. That said, the potential losses are more limited: you cannot, for instance, lose $2.1bn betting on horses (though I’m sure there are those who have tried). If the financial markets are the most complex man-made system ever created, consider how difficult it is to assess risk statistically on betting exchanges, which fundamentally depend on real-world situations with a near-infinite number of variables.
The 2012 Grand National is a prime example of such unaccounted-for risk. The favourite, Synchronised, which had won the Cheltenham Gold Cup only weeks before and looked on track to win, fell at a notorious jump and had to be put down. At a stroke, this unanticipated event dashed the hopes and stakes of those who had backed the consensus on its ability to win without taking into account the dangerousness of the course, or the likelihood of any one horse falling, when spreading their bets.
While the incident has further fuelled an already fierce debate over the safety of National Hunt-style racing in Britain, it also reinforces the point that even with the best modeling, it’s impossible to estimate the chance of a particular horse falling and dying from its past performance (it obviously only happens once). The best that can be done is to include the possibility of any horse falling in the model that informs your spread on an exchange or across different bookmakers. This supports the argument that it’s fundamentally harder to account for risk when modeling markets rooted firmly in the real world than, say, the commodities futures markets.
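The “possibility of any horse falling” can be modelled crudely; a sketch assuming (unrealistically) that falls are independent and equally likely, with a made-up per-horse probability:

```python
def p_any_fall(p_single_fall, n_horses):
    """P(at least one horse falls), assuming independent, identical risks."""
    return 1 - (1 - p_single_fall) ** n_horses

# With a hypothetical 5% fall risk per horse and a 40-horse field:
p_any_fall(0.05, 40)  # ~0.87 -- a fall somewhere in the field is very likely
```

Even a rough figure like this shows why a spread should price in the event of *some* horse falling, even when any particular horse is a heavy favourite to finish.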
The failure of statistical modeling of specific risks in instances like the one described above shows us the limits of the approach. While it’s always good to know the overall likelihood of an event and to adjust your approach to suit that probability, knowing about risks and being able to truly account for them are two different things. It’s always tempting to believe we can fully control events if we understand them well enough, but despite our best efforts that remains impossible. The most we can achieve is to measure and account for as much risk as we can.
December 4, 2012
People generally prefer things to be clear-cut, easily definable, and obedient to the rules we set out for them. For centuries this idea has been hugely alluring, as it allows us to believe that we can control our environment if we can understand it well enough. While this is to a certain extent true, there are any number of points where it falls down, either because we lack the means to investigate the particular phenomenon in question properly, or because we simply can’t.
In a previous post on this blog we covered two statistical fallacies most of us fall for in our daily lives, the Hot Hand and Gambler’s fallacies, which occur when people fail to take the laws of statistics and probability into account when making decisions. The Ludic Fallacy is interesting because it can be regarded as stemming from the inverse proposition. That is, it occurs when one applies a theoretical statistical truth to a real-world situation where it may well not actually apply. As Nassim Taleb puts it in his book The Black Swan, which considers the role that randomness plays in the course of world events, doing so becomes problematic when one attempts to describe real life solely with models derived from “studies of chance and the narrow world of games and dice.”
The fallacy crops up most often in gambling on events and trading on financial markets. One frequently cited instance is Black Monday in 1987, when stock markets in Hong Kong fell by 45.5%, dragging down markets around the world and causing the Dow Jones Industrial Average to drop 508 points in a single day. It was a huge financial disaster, and its precise cause is still debated. However, one factor that many believe contributed heavily was program trading: using computer programs to trade large numbers of securities based on predetermined conditions. When the stock markets started to fall, the simple algorithms governing those programs’ behaviour began selling en masse, hugely exacerbating the crisis.
This is a perfect example of the Ludic Fallacy at play. The companies using program trading to automate their buying and selling of large amounts of securities at the time believed that the models upon which they based the programs’ behavioural conditions could accurately predict and react to any situation they encountered. In fact, when exceptional events rendered those conditions inadequate, it caused a financial disaster on a scale previously only seen in the Crash of 1929 that preceded the Great Depression. Fortunately, the markets eventually recovered and it did not cause a similar, awful economic crisis; however, it provides a very good example of the fallacy and how disastrous its consequences can be.
Another example would be the multiple polling organisations and pundits who got the outcome of the recent US elections disastrously wrong. By election night, the various organisations conducting polling, such as Gallup and Rasmussen, had the race as “too close to call”, based on the different models they used to collect and interpret data on the likely voting intentions of the electorate. There was much discussion of the polling methods and statistical interpretations used in the run-up to the election, and much controversy over which organisation’s polls were the most accurate, as their findings tended to align with their respective political leanings. It even led to vitriolic attacks on those, such as Nate Silver, who disagreed with the consensus that the race was too close to call.
As it turns out, they’d fallen victim to the Ludic Fallacy by significantly overestimating the accuracy and utility of their polling models. As a result, the Republican party, and Mitt Romney in particular, were so unprepared for defeat that the presidential candidate had not even written a concession speech. I’m willing to bet that the companies and wealthy individuals who pumped billions of dollars into Romney’s failed presidential bid would count this as a disaster (even if most of the rest of the world wouldn’t). They took a gamble that he would win, based partly on the models provided to them by polling companies, and lost billions as a result.
In The Black Swan, Taleb is perhaps too critical of statistical modelling of the world. It’s usually entirely valid at least to consider how to proceed based on what’s happened previously and on the study of empirical data. Nevertheless, he gives us a counter-argument to the over-use of such methods, and illustrates the need to balance statistical and theoretical knowledge with an awareness that the real world doesn’t always obey the models we create to describe it.
November 27, 2012
Betting exchanges allow customers to buy and sell the outcome of sporting or other events. If I wanted to sell England in a World Cup football match at a certain price, I’d have to place my order and wait for it to be matched by someone else. Quite often, things are bought and sold on an exchange at an almost instantaneous rate. While customers are free to buy and sell amongst each other, the professionals who provide the bulk of the underlying liquidity in markets are known as market makers.
These market makers are individuals or companies who spend their time monitoring the orders on the betting exchanges whose markets they’ve chosen to make. They place orders on the market, and monitor and respond to the orders of others, in order to ensure liquidity. In essence, they make sure that there’s a buyer for every sell order and a seller for every buy order.
Market makers work through buy and sell transactions, the end goal being a balanced book and, hopefully, a profit. First, a market maker uses a computer-generated model to tell them what the price of a particular contract should be; a contract is the agreement to buy or sell a particular position on something in the market. The price is calculated using historical data and trends related to the contract in question. Once a price has been set, the market maker lowers it slightly for the bid price and raises it by an equal amount for the offer price: these are the “buy” and “sell” prices, and the gap between the two is known as the spread. The prices, and the margin between them, shift as the market maker acquires new information affecting those contracts.
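The bid/offer construction described above can be sketched in a few lines (the fair price and half-spread here are invented for illustration):

```python
def make_quote(fair_price, half_spread):
    """Symmetric two-sided quote around a model's fair price."""
    return fair_price - half_spread, fair_price + half_spread

# The model says the contract is worth 2.64; quote a 0.04-wide market around it.
bid, offer = make_quote(2.64, 0.02)  # bid 2.62, offer 2.66
```

In practice the quote need not be symmetric: a market maker holding too much of one side will skew the prices to attract offsetting flow.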
The act of providing a place to buy and sell contracts creates a market; betting exchanges provide such an arena and tend to require market makers to help them remain liquid. Because they don’t control the markets, but rather help them move in the direction they’re taking anyway, market makers do take the risk of their contracts devaluing before they’re sold. Thus, their goal is to make more profit than loss on contracts as they buy and sell over the course of trading through tailoring their spread.
For example, let’s imagine you wanted to sell a £50 stake in England at odds of 2.64. The market maker would buy this position from you, then put the contract back up for sale at a slightly higher price (in this case probably £50.10 or so) to turn a profit on it, and it would be bought by another user. By conducting thousands of trades like this a day, market makers can make a lot of money, despite the tiny incremental profit from each sale.
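The economics of that example are simple to spell out (the daily trade count is a made-up figure for illustration):

```python
buy_price = 50.00    # the market maker pays you £50.00 for the position
sell_price = 50.10   # ...and re-sells the contract for £50.10
margin = sell_price - buy_price          # £0.10 per round trip

trades_per_day = 5000                    # hypothetical volume
daily_profit = margin * trades_per_day   # ~£500 from tiny per-trade margins
```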
Market makers have very high earning potential and are compensated for the risks they take through the prices in their spreads, as in the example above, or through reduced commission, among a variety of other reward mechanisms. Unlike syndicate-based fundamental trading, market making is a high-frequency activity: a market maker trades hundreds or thousands of contracts daily, making small price adjustments to try to profit, whereas syndicates and fundamental traders do the opposite. The important take-away here is the function and methods of market makers on a betting exchange like Smarkets.
November 13, 2012
In the gambling industry, bookmakers have traditionally been the dominant force, calling the shots for punters for roughly as long as people have been betting on things. However, with the rise of the internet as both a huge part of all our lives and the main communications medium of the modern age, that dominance is being challenged by betting exchanges. These exchanges can and do offer better value to bettors than bookmakers, and we’re going to explain exactly how.
The crucial difference between the two is that where bookmakers, as their name suggests, make the odds and then sell them to bettors, exchanges merely provide a service for their users to trade odds and prices with one another. Bookmakers turn a profit by slightly shortening the odds they offer on events, whereas exchanges generally make money by charging commission on winnings. Both are valid ways for a gambling business to make money; however, it’s demonstrably true that exchanges usually give their customers better value in comparison.
Let’s look at a fictional football match between the Netherlands and Denmark as an example.
First, let’s take the bookmaker’s odds:
Netherlands to win: 4/7 (64%)
Draw: 11/4 (27%)
Denmark to win: 11/2 (15%)
As we stated before, they build a profit margin into the odds they offer on events; this is known as the “overround”. In this case, the percentage odds given above add up to 106%, so the overround comes to 6%.
Overrounds are the standard way for bookmakers to turn a profit, and can go as high as 20% above the “real” odds they’ve calculated. In the above scenario, they’d be making £6 on every £100 spent with them, no matter the outcome of the game. Most people who use bookmakers for their betting aren’t aware of overrounds, let alone how bad a deal they can sometimes get as a result. Considering the vast amounts of money which change hands between punters and bookmakers before a big event, that adds up very fast. As such, you’re unlikely to see an impoverished bookie.
Now, let’s examine how a betting exchange might deal with the same event. Betting exchanges don’t set odds, so in this example these odds are taken from the market on that putative event:
Netherlands to win: 1.62 (62%)
Draw: 4 (25%)
Denmark to win: 7 (14%)
Here, the odds add up to 101%. As betting exchanges don’t make money selling positions on events (because they don’t give odds themselves), but rather merely provide the service allowing people to trade positions and odds on those events, there’s no mechanism to rig the odds to produce a profit. The extra 1% on the market in this example is a product of the traders in the market very slightly over-valuing an outcome.
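Both overround calculations can be reproduced directly from the odds quoted above (the exact sums land at roughly 105.7% and 101.0%; the 106% and 101% figures come from rounding the listed percentages):

```python
def implied_from_fractional(numerator, denominator):
    """Implied probability of fractional odds numerator/denominator."""
    return denominator / (numerator + denominator)

def implied_from_decimal(odds):
    """Implied probability of decimal odds."""
    return 1 / odds

bookmaker = [implied_from_fractional(4, 7),    # Netherlands to win, 4/7
             implied_from_fractional(11, 4),   # Draw, 11/4
             implied_from_fractional(11, 2)]   # Denmark to win, 11/2
exchange = [implied_from_decimal(o) for o in (1.62, 4, 7)]

sum(bookmaker)  # ~1.057 -> roughly a 6% overround
sum(exchange)   # ~1.010 -> roughly a 1% overround
```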
Betting exchanges make their money by taking commission on their users’ winnings. Any overrounds or underrounds (the same principle, but where the odds come to less than 100%) on exchanges are purely a product of the markets as composed of individuals with different opinions on events. Unlike bookmakers, they’re not set by the company to produce a particular profit margin.
Bookmakers do have some advantages over exchanges: they can take on unusual bets or offer odds on very specific occurrences, while exchanges can usually only offer markets that will appeal to a wide audience. However, as we’ve seen, exchanges offer better value to bettors the vast majority of the time, through their commission-based structure and their ability to generate fairer odds. Both types of gambling enterprise seek to make money, but the way in which betting exchanges do so enables them to offer significantly more value to their customers.
November 11, 2012
Smarkets is at start-ups job fair Silicon Milkroundabout today recruiting for new talent. Wanna work for us? Stop by and say hello. If you can’t make it, check out the ad below then send your CV to: firstname.lastname@example.org. We’re also still recruiting Client Services Reps.