Future Mobile Technology

Future Mobile Technology – What Changes in Tomorrow’s Devices?


As part of our continued look at mobile trends, we examine some of the biggest developments in future mobile technology. In our previous article, we discussed the trends of divergence, convergence, data and consumption. This time, we'll focus on the specifics of the next wave of mobile technology and its implications. Three overlapping and complementary developments stand out: 5G, edge computing and changes in how we interact with devices.

Implications of 5G

There is unquestionably a degree of uncertainty about 5G and what it means for everyday mobile users, and opinions vary about what the impact could be. 5G has been termed "the G for industry" – an enabler of lower latency which spurs on the Internet of Things, automation, sensors and autonomous vehicles. For the everyday mobile user, 5G seems to promise, in effect, a significantly quicker 4G. Nonetheless, here are some things it promises:

  • The Unknown – The new capacity will be filled by the unknown: richer apps, more video experiences and better games. When 3G and 4G arrived, we didn't know what the next waves of apps would be. Along came Snapchat, TikTok and mass subscription streaming for mobile.
  • Games Streaming – Lower latency, coupled with improving device graphics and processing, will open up a new wave of gaming opportunities. Google's Stadia is an example of the new wave of subscription gaming platforms where the 'heavy lifting' is done in the cloud and devices essentially require only a stable connection.
  • Video Consumption – 4G was seminal in increasing consumption of video across social media and facilitating video subscription platforms. 5G points to a further increase in that consumption, with more consistent quality of streams. "Downloading" and "buffering" become increasingly obsolete words in the context of video.
  • Immersive Experiences – Augmented and Virtual Reality's potential will be unlocked by the deployment of 5G. For both, high-latency responses can induce motion sickness and have hindered the ability to design great VR. 5G's Ultra-Reliable Low-Latency Communications (URLLC) presents a new canvas for developers to design experiences that allow both technologies to be used more widely, and potentially for longer durations.

Processing on the Edge

Edge processing brings computation and data storage geographically closer to where they are needed and reduces the need to transmit large volumes of data back to central data centres. At face value, this contrasts interestingly with 5G, which promises higher bandwidth for transmitting data and should by rights ease that process. In reality, the two developments are complementary, serving to reduce traffic on the network and improve speed for the user on device.

What does this mean for mobile? As mentioned, processing on device reduces latency. It is also more efficient from the perspective of performance and energy saving, which lends itself to great immersive experiences.

There is another element to edge processing – privacy. A distributed network reduces the damage of a centralised database being compromised: a significant volume of insights are processed downstream and, in effect, never make it there. New capabilities and an increasing societal consciousness about privacy have also brought about federated learning – machine learning models deployed straight to devices for insights. By processing more personal data locally on the mobile, the same customer insights can be derived in a more pseudonymised way while protecting the anonymity of individuals.
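To make the idea concrete, here is a minimal sketch of the federated-averaging pattern behind federated learning – illustrative only, with invented data and function names: each simulated device fits a simple model on its own local data, and only the model parameters, never the raw data, are sent back to be averaged.

```python
import random

def local_update(weights, data, lr=0.1, epochs=20):
    """On-device step: train a 1-feature linear model y = w*x + b
    on this device's private data via per-sample gradient descent."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def federated_average(updates):
    """Server step: average the parameters reported by each device."""
    n = len(updates)
    return (sum(u[0] for u in updates) / n,
            sum(u[1] for u in updates) / n)

# Three simulated devices, each holding private samples of y = 2x + 1.
random.seed(0)
devices = [[(x, 2 * x + 1) for x in (random.random() for _ in range(20))]
           for _ in range(3)]

weights = (0.0, 0.0)
for _ in range(10):  # communication rounds
    weights = federated_average([local_update(weights, d) for d in devices])

print(round(weights[0], 1), round(weights[1], 1))  # ≈ 2.0 and 1.0
```

The server never sees a single (x, y) pair – only averaged model parameters – which is the privacy property the paragraph above describes.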

Mobile interfaces

Mobile devices haven’t changed much in appearance since the arrival of the modern touchscreen smartphone aesthetic. While many have noticeably larger screens with improving quality and refresh rates, the lack of innovation in the interface or form factor raises the question of why we should continue to upgrade devices as improvements become increasingly marginal. The quest for a new form factor has led to a series of failed devices with folding screens, as the early models haven’t proved robust enough or have fallen foul of user error. Beyond what simply amounts to a bigger screen when unfolded, the way in which we interact with the device remains the same.

Of more interest are developments away from the screen. While not strictly mobile, Amazon has been exploring the Buds, Frames and Loop – a series of wearables which attempt to bring voice into the wider world. They are, in effect, invite-only beta products – for Amazon, it’s as much a question of what kinds of tasks people would do through voice as an interface, potentially within earshot of others.

Apple have historically waited to perfect their version of a technology before launching it to market, striking a sweet spot between timeliness and maturity while delivering a great user experience. Based on patents and an inadvertent leak in the official “golden master” of iOS 13, we have seen early indicators of AR glasses that will serve as a standalone headset in addition to being an accessory to the iPhone.

The idea of AR/VR and voice being complementary to the mobile experience has a direct parallel – mobile and television. According to Mary Meeker, whose annual trends report we have written about before, the time people spent on mobile surpassed TV last year for the first time ever – 226 minutes for mobile versus 216 minutes for TV. While they compete in one sense, the amount of time people spend using two screens has also increased dramatically, with roughly 88% of Americans using a second device while watching TV.

What now?

As always, we have an interest in speaking with companies with capabilities in any of this future mobile technology. At Alpha Hub, part of our mission is to accelerate the adoption of emerging technologies across Flutter Entertainment. If you’re working with innovative technologies that you think could support our global brands such as Paddy Power, Betfair, Sportsbet or FanDuel, we’d love to hear from you.

Betting On Sports Conference 2019

The 4th edition of the Betting on Sports Conference ran at London Olympia from the 17th to the 20th of September. In total, 3,500 delegates attended seminars, presentations and panel discussions from over 300 industry experts, and 120 industry exhibitors had stands across the event. Given the growth of the conference and its increasingly international audience, the next European edition will be run out of Barcelona. The conference presents a great opportunity for us to reflect on some of the major trends in the industry right now. We have covered some of the major talking points from the three-day event below.

The Repeal of PASPA and the US Opportunity

With the US now deregulating gambling on a state-by-state basis, opportunity is presenting itself to international operators heading towards the new market. Europe is very much a red ocean – intense competition, increasing taxes and growing regulation are affecting incumbents’ bottom lines and making it difficult for newcomers to take market share. The conference comprised a lot of technical sessions giving insight into the considerations for smaller operators trying to enter the US market. The opening of that market is estimated to create 90,000 new jobs. Everyone is looking for talent – regulators and operators alike are hiring for technical compliance roles, which are in huge demand. The response has been people being shipped in from the EU for half-year stints, as only 6-month visas are being issued in many cases. This is a tactical, not a strategic, solution.

The overall view from the CEO panel is that unless you are big and have capital, stay out of the US right now. There are several challenges to overcome in working through state-by-state regulation and, in certain states, particularly high entry costs and subsequent taxes. While to date the US customer is proving far more brand loyal than in the EU or Australia, they come with a much greater cost of acquisition than in either territory. Acquisition costs can exceed $500 in some circumstances.

As for the shape of the market, there are some interesting differences in the sports followed. While the major league sports were all to be expected, tennis is likely to be the 4th highest revenue stream in the near term, overtaking the NHL. Early on, retail is prominent for a number of reasons – approximately 50% of payments are failing in the US right now, and US retail is more of an entertainment destination, whereas UK retail is more transactional. Into the medium term, In-Play betting – something which isn’t available over the counter – will become more prominent in the sports betting landscape.

Other Markets of Interest

Latin American countries are opening up to sports betting, and operators’ approaches to entering these emerging markets vary. Many are opting to partner with existing companies and to supplement their offerings with a land-based presence for visibility.

Colombia is the only fully regulated online gambling market in LATAM at the moment, but other countries such as Brazil and Mexico are following suit. The existing regulations in Brazil and Mexico are outdated and, in most cases, do not cover online betting. The common opinion was that these markets have high potential and are expected to grow at a faster rate. Panellists suggested that LATAM markets would be a natural extension for European operators. There are currently 19 licensed operators in Colombia.

Africa is an emerging market in the gambling space. Logistically, many parts of the continent have well-established mobile payment and online banking options. That mobile ecosystem will make payments for an online presence significantly more convenient.

India presents an interesting opportunity as a nation with a fanatical cricket following and heavy mobile penetration – more than 100 million mobile users tuned in to Hotstar, an on-demand streaming service owned by Disney, on June 16, the day India and Pakistan played each other. How to monetise that demographic remains the most pressing question, but access to it through mobile channels is certainly there.

M&A and Market Complexion

The main reasons for mergers and acquisitions from the large players to date are as follows:

  • Market Access/Localization
    • Need to be local enough to be accepted
    • Need to overcome language and cultural barriers
  • Cost efficiency
  • Adding different expertise/products to the portfolio
  • Faster time to market

Increasing regulation has accelerated mergers and acquisitions in gambling as firms seek critical mass to offset rising operational costs and eroding margins. Mid-tier players continue to vanish as they can’t handle the regulatory pressure.

The US market is underpinned by equity investors – a number of them are looking to grab investment opportunities in betting. Currency devaluation has also helped M&A by bringing down the valuation of assets in weaker-currency states. Those investors are also eyeing cheaper assets in Europe, where currency devaluation and tighter regulation are impinging on operators’ market value.

Data is an Opportunity

Data is a significant enabler for the next wave of insights and products in this industry. The way in which we procure data may change in future, with the likes of computer vision automating swathes of what is currently manual.

More data points give richer insights, better pricing accuracy and the ability to run more markets fed with dynamic datapoints. Machine learning will underpin a huge amount of discovery within these newly available datasets.
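As a rough illustration of the pricing point – the probabilities, margin and function name below are invented for the example, not taken from any real trading system – a pricing model's estimated outcome probabilities are typically converted to decimal odds with an overround (the bookmaker's margin) applied:

```python
def to_decimal_odds(probabilities, overround=1.05):
    """Convert outcome probabilities to decimal odds, scaling them
    down so the implied probabilities sum to `overround` (here 105%)."""
    return [round(1 / (p * overround), 2) for p in probabilities]

# Invented home/draw/away estimates for a single match market.
fair = [0.50, 0.30, 0.20]
print(to_decimal_odds(fair))  # [1.9, 3.17, 4.76]
```

Better probability estimates feed directly into this step, which is why more (and more dynamic) data points translate into pricing accuracy.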

As part of the merger landscape, there is a growing concern about implications of monopolies amongst data providers. There is ongoing litigation regarding exclusivity of access to data for football in England currently. While these data providers are improving their suite of products and the depth of their offerings, competition is of the utmost importance to the industry for keeping pricing and access reasonable for operators.

What now?

As always, we have an interest in speaking with companies with capabilities or technology in any of these spaces. At Alpha Hub, part of our mission is to accelerate the adoption of emerging technologies across Flutter Entertainment. If you’re working with innovative technologies that you think could support our global brands such as Paddy Power, Betfair, Sportsbet or FanDuel, we’d love to hear from you.

Disruptive Innovation – It’s Not Just Disrupt Or Be Disrupted

Disruptive innovation is being disrupted.

In 2014, Jill Lepore of The New Yorker took aim at Clayton M. Christensen’s seminal work on disruptive innovation, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. The selective use of case studies, the glossing over of other economic factors and the limited success Christensen himself had in using the theory predictively were all presented by Lepore as issues with the work. Certainly, Christensen had some quite public failures with his theory which left him open to attack; the collapse of a $3.8-million Disruptive Growth Fund in less than 12 months did him no favours. Then in 2007, he told Business Week that “the prediction of the theory would be that Apple won’t succeed with the iPhone”, before adding, “History speaks pretty loudly on that”.

Christensen, as he taught any disrupted incumbent to do, fired back in a 2015 Harvard Business Review article titled “What Is Disruptive Innovation?”. He was no less scathing in his response: “There’s another troubling concern: In our experience, too many people who speak of “disruption” have not read a serious book or article on the subject. Too frequently, they use the term loosely to invoke the concept of innovation in support of whatever it is they wish to do.” The serial misclassification of what constitutes disruptive innovation has diluted the effectiveness of Christensen’s strategies to counter it. An important part of this is understanding and differentiating sustaining innovation – a more moderate progression of technology which doesn’t evoke the same emotive response.

Explaining the misbranding of cases with his terminology, Christensen pointed to Uber as something which didn’t fit the parameters. A disruptive innovation is one which “creates a new market by providing a different set of values, which ultimately, and unexpectedly, overtakes an existing market”. His contention was twofold: disruptive innovations originate in low-end or new-market footholds, and they don’t catch on with mainstream customers until their quality catches up to those customers’ standards. Googling the topic turns up a series of retorts arguing that UberSELECT, or the prohibitive waiting times for customers of that first wave of taxis, would make Uber fit the criteria of his theory after all. Those arguments have quite a pedantic feel – telling the creator of the term that something falls within his own definition.

New world, new language

Whether Uber fits the increasingly narrow lens through which Christensen defines disruptive innovation ultimately amounts to semantics. A more fundamental issue lies in using 20th-century business lexicon and playbooks for 21st-century internet businesses. The contention that it’s necessary to establish whether a company picked off low-end customers first in order to call it disruptive is as problematic as it is aimless. We reside in a world which has fundamentally changed from the one The Innovator’s Dilemma was written for, and platform businesses have changed the medium of attack in many cases. Historically, the vertically integrated supply chain was subject to attack at various points; internet businesses increasingly have a virtually integrated supply chain.

More interesting still is the idea of left-field disrupters who have access to your existing customer base. As industry convergence becomes more pronounced – that is, the ability for companies from one industry to cross into another, predominantly through digital channels – challengers are coming from new places. Amazon are an increasingly threatening proposition and, case in point, recently started ventures in insurance products and selling flights. Their ability to cross those lines is further supported by a suite of other assets – they can facilitate payments and have extensive cloud capabilities, streaming platforms and a significant physical distribution network. While this may be one of the largest companies in technology, it doesn’t change the fact that the landscape for disruption has changed for everyone. If the approach is no longer a piecemeal erosion of your customers but amounts to a full-scale assault, then finding terminology that addresses it more appropriately makes sense.

Big-Bang Disruption

“Big-Bang Disruption” was the term Larry Downes and Paul Nunes coined in an article of the same name in the Harvard Business Review in 2013. Big-bang disruption happens at such an accelerated pace – with products created and distributed through online channels – that the conventional strategies prescribed by the original disruption literature fail.

“But perhaps the biggest challenge to incumbents is that big-bang innovations come out of left field, combining existing technologies that don’t even seem related to your offerings to achieve a dramatically better value proposition. Big-bang disrupters may not even see you as competition. They don’t share your approach to solving customer needs.”

The hallmarks of these disrupters are:

  • Unencumbered development – there are more minimum viable products being launched directly into the market that can be scaled depending on their success.
  • Unconstrained growth – unlike Everett Rogers’s classic bell curve of five distinct customer segments (innovators, early adopters, early majority, late majority and laggards), the big-bang adoption curve is far steeper, as an early set of trial or beta users paves the way for mass adoption.
  • Undisciplined strategy – with no alignment to a single one of Treacy and Wiersema’s value disciplines of operational excellence, product leadership or customer intimacy, a redefined digital offering can achieve competitive advantage immediately, delivering on all value propositions from the beginning.

Those traits make it extremely difficult for an incumbent to survive disruption – why would customers not take an offering that is better on every level?

Reductionist history

In 2000, Reed Hastings, the founder of Netflix, approached Blockbuster CEO John Antioco about a partnership. A simplified history tells us that Antioco rebuffed the offer for Netflix to run an online brand for Blockbuster, that the refusal led to a 2010 bankruptcy while the plucky upstart became the behemoth of online streaming – classic disruptive innovation, one might say. While online would ultimately provide a new platform and market for Hastings to overtake physical video stores, there was no reason at the time for Blockbuster to cede the strong brand recognition and customer base they had built.

However, Blockbuster’s existing model had a flaw by design – they were massively reliant on late fees. These were an annoyance to customers but accounted for about $200 million annually in revenue. In 2004, increasingly concerned by the threat of Netflix and Redbox, Antioco proposed a significant overhaul to secure the future of the brand: dropping those late fees and making a $200m investment to launch Blockbuster Online. Carl Icahn, an investor in the company, questioned the CEO’s decision-making and leadership because the strategy would significantly hamper profitability. The loss of the board’s confidence ultimately culminated in the departure of Antioco and, eventually, the demise of the largest video rental chain in the world.

Viewed through this different lens, Netflix weren’t quite the unopposable disruptive force they were made out to be. Blockbuster had ample time and opportunity to respond but failed to do so appropriately.

Seagate Technology, famously displaced in Christensen’s hard-disk drive example, were never actually felled by disruption, despite their allegedly fatal error of delaying the manufacture of the 3.5-inch drives valued by producers of portable computers and laptops in the mid-80s. Between 1989 and 1990 their sales doubled, reaching $2.4 billion, “more than all of its U.S. competitors combined”. Seagate had $11 billion in revenue in 2018, still predominantly selling hard drives. As Jill Lepore pointed out:

“In the longer term, victory in the disk-drive industry appears to have gone to the manufacturers that were good at incremental improvements, whether or not they were the first to market the disruptive new format. Companies that were quick to release a new product but not skilled at tinkering have tended to flame out.”

For all the stories of those companies displacing one another in the 1980s only to collapse when the next arrived, was there not a question of why none of them could replicate their initial success? That adage of “staying too close to your customer” probably applied to the company serving I.B.M. at the time – a customer which insisted it only wanted better and faster versions of the 5.25-inch drive. That company was Seagate Technology.

It’s not just disrupt or be disrupted

The definition of disruptive innovation will continue to be debated. While interesting as a technical exercise, that debate must be tempered by an understanding of how infrequently monumental displacement within industries actually occurs, regardless of definition. As we look at the converging landscapes and new markets of digital business, there’s a growing understanding that competitors can cross industry lines and, by distributing digitally, instigate big-bang disruption. That said, the robustness of business models and the ability to deliver consistent improvements to products and services remain of paramount importance. A new competitor’s ability to pick off underserved customers begins with the question of why those customers were underserved in the first instance; sustaining innovation seems quite important in that context. The idea that everyone needs to be the disrupter or stands to be disrupted is folly. There isn’t always a wolf lurking but, if there is, it’s probably because you left the gate open. At the end of the day, serving your customers well matters.

Predictive Analytics in Sports – Riding the Big Data Wave

With the constant pressure to improve performance, it is of little surprise that predictive analytics in sports is such a hot topic. As data capture and analysis technologies evolve, the quest for greater, more accurate and real-time insight looks set to continue. 

“Data is the new oil” is both an oversimplified and a nuanced description of the current state of play in the world of insights. Those peddling the mantra most are probably as unaware of how wildly inaccurate it is as of how succinct it can be.

  1. Data will, much like oil once did, power a wave of new transformative technologies – artificial intelligence, automation and advanced, predictive analytics.
  2. Data, much like oil, requires refining to extract the value of a trapped asset because in its rawest form it does very little to power the machines we want it to.

At this point we should stop, because perpetuating the analogy of “data is the new oil” isn’t helpful.

  • Limited transferability – If I take a barrel of crude oil from the Green Canyon in the Gulf of Mexico or the Fateh Oil Field off Dubai, they’ll achieve the same thing. In that sense, they’re worth the same to whatever refinery they ship to. But while the data which Uber holds has an intrinsic value to Lyft, it probably doesn’t hold the same value for Walmart, AT&T or Netflix.
  • Not a finite resource – Using data once then assuming its usefulness has been depleted would certainly be a mistake because it’s not a finite resource.
  • Ease of extraction – The world’s data is not becoming more difficult to source. It’s not getting more expensive to extract (the inverse is true of data in fact).

What data can achieve depends far more on circumstance. Those who use it best in their circumstances are reaping the rewards accordingly – something particularly true in the world of sport.

The new wave

Unquestionably, this influx of data has forever changed the world of sport. Gone is the era of the pint-swilling, chain-smoking Premier League footballer. The curtain comes down on the era of portly pitcher Bartolo Colón. The John Daly physique isn’t a common sight amongst participants in golf majors. Today it seems intuitive that athleticism will pay dividends for athletes.

In the same way as businesses’ ability to collect and interpret data can be a significant contributor to their success, the world of sport has seen an overhaul in its approach to sports science and decision making. That, in its first wave, changed the nutrition and physique of athletes.

Over the past two decades, an industry which predominantly relied on intuition has become increasingly data-driven. New technology makes it possible to track, quantify and analyse almost everything athletes do in training and match environments.

Decision-making off the field has increasingly found itself driven by analytics too. While GPS tracking, heart rate monitors and laser gates quantify on-field performance, machine learning is trying to unearth the next superstar, find a competitive advantage for coaches to implement, or even gauge how fan engagement translates to season ticket sales. All told, professional sport feels like it has fully embraced statistical rigour through data.

The dawn of predictive analytics in sports

So, what did the first sports statistic look like? A reasonable contention would be the batting average in cricket, which has been used to gauge cricketers’ relative skills since the 18th century. It involved capturing and aggregating individual scores for all players and dividing by the number of games they played. Henry Chadwick, an English statistician raised on cricket and dubbed the “Father of Baseball”, took this idea and developed the batting average (BA) and earned run average (ERA) in baseball in the 19th century.
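As a concrete illustration (the sample figures below are invented for the example), both of Chadwick's baseball statistics reduce to simple ratios:

```python
def batting_average(hits, at_bats):
    """Batting average (BA): hits divided by at-bats."""
    return hits / at_bats

def earned_run_average(earned_runs, innings_pitched):
    """Earned run average (ERA): earned runs allowed per nine innings."""
    return 9 * earned_runs / innings_pitched

print(f"{batting_average(180, 550):.3f}")    # 0.327
print(f"{earned_run_average(65, 210):.2f}")  # 2.79
```

Simple as they are, these ratios were the start of quantifying performance from captured match data – the same pattern, at vastly greater scale, that underpins today's analytics.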

For all that early invention, often progress can be marred by setbacks. One of the most famed errors of early sports data was that of Charles Reep who, through a phenomenal misinterpretation of his own statistics which he gathered pitch side, concluded that most goals were scored from fewer than three passes.

It was, after all, a simple error which shouldn’t have had all that much consequence? Not quite – his mistake resulted in the invention of long-ball football, which marred England from the 1950s for the next half century and beyond. Jonathan Wilson, in Inverting the Pyramid: The History of Football Tactics, said of the misinterpretation:

“It is, frankly, horrifying that a philosophy founded on such a basic misinterpretation of figures could have been allowed to become a cornerstone of English coaching. Anti-intellectualism is one thing, but faith in wrong-headed pseudo-intellectualism is far worse.”

It stands testament to the importance of the interpretation of data as much as anything else. Conversely, a more recent success has added “analytics” to the lexicon of professional sports everywhere. Michael Lewis’ 2003 book, Moneyball: The Art of Winning an Unfair Game, captured the story of the Oakland A’s success in building a team of undervalued talent through sabermetrics – the empirical statistics of baseball so keenly studied by the Society for American Baseball Research (SABR). It has inspired mass adoption of statistical analysis and imitation of the A’s approach throughout Major League Baseball and across many other top-level sports.

Ignacio Palacios-Huerta, professor at the London School of Economics, aided Chelsea’s preparation for their 2008 Champions League penalty shootout by providing information on the tendencies of the Manchester United players. He correctly anticipated Cristiano Ronaldo’s stuttered approach to his penalty, which Petr Cech saved as a result, and noted that Edwin Van der Sar was far more competent diving to his right. The issue, aside from some bad luck in John Terry slipping with a chance to win, was Nicolas Anelka deviating from the plan and placing his shot to Van der Sar’s right, where he comfortably saved it.

What Moneyball was to the Big Data revolution, Astroball is to the tale of modern sporting analytics and how far the landscape has advanced. The book by Ben Reiter covers the rebuilding of the Houston Astros during a historically bad three years, which made them one of the worst teams ever in professional baseball.

Reiter approached them as a journalist asking how a team could be so consistently bad despite an assembly of brilliant minds like Sig Mejdal, who was previously at NASA, and Jeff Luhnow, a management consultant from McKinsey who had succeeded with the rival St. Louis Cardinals.

The part which makes it particularly compelling is that Reiter, after seeing the “process” first hand, ran a 2014 cover story in Sports Illustrated proclaiming the Astros as winners of the 2017 World Series – a prediction universally derided at the time which turned out to be prophetic. “The Nerd Cave”, of which Mejdal took charge, gave the Astros a consistent edge in pitching and recruitment by moving away from the dichotomy Moneyball creates between scouts and analytics, embracing and metricising the biases and heuristics of intuition. It represents an astonishing leap in a short space of time: the intuition so reviled by statisticians has been embraced to quantify the evaluation of ability by eye.

Where to next?

The world of sport is doing well in terms of interpreting the volume of data it has to hand. When a football club tracks its players’ GPS data, it can tell where they are throughout the 90 minutes. When it comes to the opposition, however, there’s a limitation. Most publicly available football data covers on-the-ball action – think passing completion, tackles, interceptions, expected goals (xG) and expected assists (xA).
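A toy sketch of how such on-the-ball metrics are derived from event data – the event list and player names below are invented for illustration, not drawn from any real provider's feed:

```python
# Each event records who attempted a pass and whether it was completed.
events = [
    ("Player A", True), ("Player A", True), ("Player A", False),
    ("Player B", True), ("Player B", False), ("Player B", False),
]

# Aggregate per player: (passes attempted, passes completed).
completion = {}
for player, completed in events:
    attempted, made = completion.get(player, (0, 0))
    completion[player] = (attempted + 1, made + int(completed))

for player, (attempted, made) in completion.items():
    print(player, f"{made / attempted:.0%}")  # Player A 67%, Player B 33%
```

Richer metrics like xG follow the same shape – aggregating tagged events – just with a fitted model scoring each event instead of a simple count.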

The next frontier is in what happens off the ball, something Mladen Sormaz and Dan Nichol presented at the OptaPro Analytics Forum 2019 in “Quantifying the impact of off-the-ball movement in football”.

We now look to the delayed run or the dropping player to capture the damage done to the defensive team’s shape by the movement of the attacking team.

What technology makes this practical?

Since universally available GPS tracking isn’t an option, computer vision promises a lot in this space – that is, gaining high-level understanding from video by using methods including machine learning to identify objects within the frame.

The ability to capture tens of thousands of data points directly from a video feed is interesting, and the additional scale allows far more questions to be asked of significantly larger data sets.

Reinforcement Learning (RL) will hopefully help answer them – a field of machine learning where the programmer sets the parameters and the model learns, through trial and error, which actions maximise return in whatever environment it is presented with. Those near-open-ended questions are hugely interesting when facing enormous data sets, as a model’s ability to probe questions the programmer mightn’t know to ask has the potential to unearth incredible, potentially unsought insights.
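As a rough sketch of that trial-and-error idea – everything below, from the reward means to the function name, is an invented toy example rather than anything used in practice – an epsilon-greedy "bandit" agent can learn which of several actions pays best purely from observed reward, with no labels supplied by the programmer:

```python
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=42):
    """Epsilon-greedy action selection over noisy-reward arms:
    explore a random arm with probability epsilon, otherwise
    exploit the arm with the highest running reward estimate."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_means)  # running reward estimates
    counts = [0] * len(true_means)
    for _ in range(steps):
        if rng.random() < epsilon:                   # explore
            action = rng.randrange(len(true_means))
        else:                                        # exploit
            action = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[action], 1.0)  # noisy payoff
        counts[action] += 1
        # incremental mean update of the action-value estimate
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

est = run_bandit([0.2, 0.5, 1.0])
print(max(range(3), key=lambda a: est[a]))  # the agent settles on arm 2
```

The agent is never told which arm is best; it discovers this from reward alone – the essence of the open-ended exploration described above, albeit at toy scale.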

What does this all mean?

As technology and predictive analytics become increasingly prevalent, sport will continue to evolve, and teams, fans, media and other industries will find new uses for this data.

In the simplest terms, this could mean better insights, and more of them. New links of cause and effect. New statistics to metricise performance. New equivalents of on-base percentage (OBP) that revolutionise sports by finding undervalued players, as in Moneyball.

In Flutter’s world, those statistics promise a lot for our offerings. The ability to garner more real-time insights creates the possibility for new markets offered more on a play-by-play basis and a richer live experience.

Regardless of the unknowns, we stand on the cusp of an exciting time for discovery in professional sport.

If you’re building innovative new solutions to collect and analyse large volumes of data, we’d love to hear from you.


Future Mobile Trends

Future Mobile Trends – Rethinking Mobile for 2020

Given how far we’ve come since the early days of the modern smartphone, what’s next for the future of mobile technology? Looking at a global market that feels increasingly saturated, what scope is there for growth, what happens to the devices themselves and how does the ecosystem around them change? This post examines the four future mobile trends that are set to have the greatest impact.

To paraphrase Benedict Evans of Andreessen Horowitz, mobile has eaten the world. A quick look at the numbers gives us a sense of the omnipresence of mobile:

  • Global population over 15 years old – 5.3 billion
  • Mobile phones – 5 billion
  • Smartphones – 4 billion

While in absolute terms 4 billion smartphones sounds significant against the global population, we must temper the figures slightly by acknowledging an uneven global distribution and that not all of those phones are connected to mobile internet. Nonetheless, there are certainly enough of them. For a sense of how much they’re being used, the numbers again paint a picture:

  • In 2018, 52.2% of all website traffic worldwide was generated through mobile phones.
  • Americans now spend more time on their mobile devices than they do watching TV.
  • Last year, mobile sales accounted for nearly 40% of all retail eCommerce sales in the US.

Future Mobile Trends

With this in mind, we believe there are four distinct trends that will shape the future mobile technology ecosystem: divergence, convergence, data and consumption.


Divergence

In line with Moore’s Law, mobile devices have seen a steady improvement in processing power and memory, particularly in top-end devices. Other things like battery life, cameras, screen resolution and graphics processors are similarly improving.

However, these improvements are not consistent across all devices in all markets. For low-end devices, like the stock of new affordable Android phones targeting emerging markets, compromises are being made in design and performance with cost in mind.

The replacement cycle for mobile devices has also slowed, leading to a subsequent slowdown in phone sales. The average iPhone lifespan is now over four years, and around two years for Android devices. And despite the era of the $1,000 phone having truly arrived, the average cost of a phone globally is decreasing.

The divergence between top-end and lower-end devices is a consideration for developers: how do we cater for all? Google have launched a series of lighter-touch apps to cater to the limited performance of low-end devices. YouTube adjusts the quality of your video stream based on your viewing conditions, another incidental bit of help for constrained devices.
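YouTube’s actual adaptation logic is proprietary, but the general idea behind adaptive streaming can be sketched simply: pick the highest rendition the measured throughput can sustain. The bitrate ladder and safety margin below are invented for illustration:

```python
# Hypothetical bitrate ladder: (label, required bandwidth in kbit/s)
LADDER = [("144p", 300), ("360p", 700), ("480p", 1500), ("720p", 3000), ("1080p", 6000)]

def pick_quality(measured_kbps, safety=0.8):
    """Choose the highest rendition whose bitrate fits within a safety
    margin of the measured throughput; fall back to the lowest rendition."""
    budget = measured_kbps * safety
    viable = [label for label, kbps in LADDER if kbps <= budget]
    return viable[-1] if viable else LADDER[0][0]

print(pick_quality(4000))  # 720p: 3000 fits in the 3200 kbit/s budget, 6000 doesn't
print(pick_quality(200))   # 144p: nothing fits, so fall back to the lowest rung
```

Re-running a check like this every few seconds, as conditions change, is essentially how a constrained device on a patchy connection still gets a watchable stream.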


Convergence

The mobile phone has become a natural point of convergence for virtually all other digital technologies. For the day ahead, you no longer need to check if you have your diary, digital camera, MP3 player, calculator and newspaper with you. If you’ve got your mobile, you have them all.

The expectation is that this trend will continue to persist into the future. For example:

  • New Realities – The mobile is a lens into another world already, but greater inclusion of VR and AR technologies will be an extension of this.
  • IoT Control – In a more sensor-laden environment of smart devices, mobile is the natural touchpoint for interacting with them – think of how the Alexa app works from your device today and extrapolate to a smart home or office environment.

Underpinning it all is machine learning, which can perform increasingly intelligent and intensive tasks on the device itself.


Data

The future of mobile data comes through 5G, the fifth generation of cellular network technology. Early tests have shown browsing speeds up to 23 times faster and download speeds up to 18 times faster for the majority of users.

In 2018, global mobile data traffic amounted to 19.01 exabytes per month. By 2022, with all of that extra speed available, mobile data traffic is expected to reach 77.5 exabytes per month worldwide at a compound annual growth rate of 46%.


Consumption

The immediate question then is: how do we fill that extra bandwidth?

Again, I’d defer to Benedict Evans:

“In 2000 or so… it seemed as though every single telecoms investor was asking ‘what’s the killer app for 3G?’ People said ‘video calling’ a lot. But 3G video calls never happened, and it turned out that the killer app for having the internet in your pocket was, well, having the internet in your pocket.”

If 3G was the birth of apps, the fatter pipes of 4G enabled massive video consumption and the rise of streaming. 5G, we can safely assume, means existing applications will get richer again. There also promises to be a new wave of Snapchats, TikToks and Instagrams that we can scarcely imagine, enabled by 5G speeds in the same way that 4G enabled the exponential growth of those apps.

The increasing consumption of rich media is also something that Mary Meeker specifically calls out in her internet trends report for 2019. Read our analysis here.

What else for the future of mobile technology?

The folding phone seems to have been the notable hardware failure of 2019 in the mobile space, as a number of major manufacturers promised, but failed to effectively deliver, enlarging screens.

However, the smartphone interface feels like it’s overdue a change with no massive leaps happening since the touchscreen first appeared, and foldable screens at least show promise of progress.

Voice as an interface also shows potential with virtual assistants like Siri and Google Assistant gaining traction.

One way or another, the mobile persists in being central to our everyday lives; it consumes, interacts and holds more of the world around us. And taking these future mobile trends into consideration will help us to design and build the next generation of apps to maximise the potential of the devices.

If you’d like to stay up to date with the latest Alpha Hub news, events and research, register with your email address below.