11 February 2025
Late on Monday afternoon, millions of Australians lost access to the internet when TPG Telecom’s network went down in the biggest broadband disruption of this year. The outage took out the TPG, Vodafone, iiNet, Internode, AAPT and Kogan brands, along with a list of Newcastle IT companies (MSPs) that resell the network. An estimated 23% of Newcastle’s fixed-line home and business services were affected.
Starting at about 5:30 pm, reports of the outage flooded Newcastle social media sites, user forums, and troubleshooting websites. Many customers, who rely on the telco’s fixed-broadband services for work, study, and entertainment, found themselves with no access.
The saving grace for TPG was that the outage occurred after business hours, allowing the company to avoid scrutiny from its significant portfolio of Business, Enterprise, and Government customers. Major organisations in Newcastle, such as the Newcastle Permanent Building Society, John Hunter Hospital (Hunter New England Health), Transport for NSW and Nine Entertainment, are clients of TPG; however, disruptions were kept to a minimum for these organisations thanks to their own redundant systems.
However, Newy 87.8 was made aware that staff at Nine Entertainment, owners of NBN Television, did notice the outage, with their engineering team describing the situation as “links down all over the place”. Fortunately, this did not affect the 6pm NBN News bulletin.
In the past couple of years alone, both Optus and Telstra have suffered high-profile network meltdowns, leading to calls for improved infrastructure, better communication, and greater redundancy across the entire Australian telecommunications sector. For many TPG subscribers, Monday’s outage felt like a replay of challenges faced by customers of other major players, and a reminder that no carrier is immune to sudden network failures.
Complaints poured in on platforms like Reddit’s r/newcastle, Facebook and the Whirlpool forums. Some users jokingly asked if TPG’s hold music was meant to be a substitute for the lack of internet entertainment, while others commented on the near-total silence from TPG. The TPG website started returning a “504 Gateway Time-out” error, and even the company’s service-status page was unreachable. Meanwhile, TPG’s call centre lines appeared to be down, leaving subscribers unable to contact support for answers or updates.
There was lots of speculation in tech-oriented communities. One prominent rumour was that the disruption stemmed from a meltdown in TPG’s Border Gateway Protocol (BGP) configuration, the system that helps internet providers around the globe know how to reach one another’s networks. Others suspected a DNS (Domain Name System) failure. Complicating matters was the fact that a small number of customers never lost their connection at all, posting screenshots of normal service while friends, neighbours, and co-workers couldn’t even load webpages.
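For readers curious how the two theories differ in practice, the distinction can be checked from an affected connection with a few lines of code. The sketch below is purely illustrative (the hostname and resolver IP are examples, not TPG diagnostics): it tries to resolve a name and, separately, to reach a well-known public IP address directly. If names fail but raw IPs still answer, DNS is the likely culprit; if nothing gets out at all, the problem sits deeper in the routing layer, as a BGP meltdown would.

```python
# A minimal diagnostic sketch, not a TPG tool: distinguish a DNS failure from a
# wider routing failure. The hostname and resolver IP below are illustrative.
import socket

def can_resolve(hostname: str) -> bool:
    """True if the name resolves through the connection's configured DNS."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

def can_reach(ip: str, port: int = 53, timeout: float = 3.0) -> bool:
    """True if a TCP connection to a raw IP succeeds (no DNS lookup involved)."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    dns_ok = can_resolve("www.tpg.com.au")   # any well-known hostname will do
    route_ok = can_reach("1.1.1.1")          # Cloudflare's public resolver, by IP

    if not route_ok:
        print("Nothing gets out at all - consistent with a routing/BGP-level failure.")
    elif not dns_ok:
        print("Packets flow but names don't resolve - consistent with a DNS failure.")
    else:
        print("Both name resolution and raw connectivity look fine from here.")
```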
Eventually, TPG acknowledged the outage in statements on Facebook and X. Instead of a BGP catastrophe, the company pinned the blame on a storm-related power failure at their AAPT data centre in Glebe, Sydney, around 5:15 pm, where a backup generator, intended to keep systems online during electrical disruptions, failed. This domino effect impacted TPG’s broadband and voice services across multiple states. While some households in Melbourne or Brisbane recovered within two hours, others in NSW remained offline well into the night.
Although TPG’s incident has dominated headlines this week, its scale and impact bring up memories of similar issues at Optus and Telstra in the last couple of years. These large-scale disruptions help illustrate the growing fragility of Australia’s telecom networks and the challenges of maintaining uninterrupted service in an era of data-hungry users and increasingly severe weather events.
Had TPG’s Glebe data centre suffered a major fire instead of a power failure, the outcome could have been much worse. Storm damage and a failed generator allowed restoration within hours by simply re-establishing power, but a significant blaze could have destroyed racks of critical servers, routers, and storage arrays, potentially taking weeks or even months to rebuild. In such an event, TPG’s reliance on a single site with no automatic failover could prove a critical vulnerability: if the many business platforms that went down, such as ordering portals and customer management systems, all depend on one facility, what happens if the building no longer exists? TPG would be forced to restore data from offsite backups and reroute traffic through alternative sites, steps that, in a true worst-case scenario, can lead to much longer outages. While most telcos plan for floods or power cuts, a catastrophic fire often means starting from scratch: rebuilding infrastructure, verifying network integrity, and overcoming the logistical challenges of commissioning entirely new hardware. An outage at one site should be something a carrier can recover from by failing over to a disaster recovery site within an hour, not over an entire evening.
Within TPG’s ecosystem, the outage was particularly jarring because of the company’s diverse portfolio. Beyond TPG-branded services, the company also supplies Vodafone home broadband, iiNet, Internode, Kogan internet and a large list of resellers via its wholesale arm AAPT. A single data centre failure in Glebe therefore rippled through multiple sub-brands, affecting residential, enterprise and government users alike. Businesses reliant on TPG’s “Cloud Hosting” and remote-access VPN solutions were interrupted, while everyday customers scrambled to manage essential activities.
TPG’s “Fibre 1000” service is incredibly popular with medium, large, enterprise and government customers, which means that had the outage occurred during business hours, it would have led to significant disruption to thousands of business services, particularly head offices and larger satellite sites.
Ironically, many turned to the NBNco website for more information, only to find it glitching under a sudden avalanche of visits. Soon after, NBN Co clarified on social media that its own network was unaffected, placing the blame squarely on TPG’s infrastructure. The partial, staggered restoration of service confused matters further: while some households in Victoria reconnected in under two hours, many in Sydney’s suburbs were forced to wait until late at night.
An internal memo circulated to TPG’s enterprise clients blamed “storm-related power loss and generator failure” at 5:15pm. With the restoration of electricity later in the evening, TPG’s systems gradually came back online, but not without a persistent question: why did the redundancy measures appear to fall short? While the telco has not provided a detailed post-mortem, many suspect that critical authentication, routing, or DNS servers depended on their main company-owned data centre, which lacked automatic failover to TPG’s other data centres.
As users on Whirlpool and Reddit discussed, a properly designed network can easily survive one location going dark, provided its infrastructure is sufficiently distributed. Optus, Telstra, and TPG have all claimed to invest heavily in redundancy. Yet in each case, a single trigger ballooned into a nationwide blackout.
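To illustrate what “automatic failover” means in this context, the sketch below shows the simplest possible version of the idea: a client-side health check that tries a primary site first and quietly moves to a standby site if the first does not answer. The endpoint names are hypothetical, and real carriers implement this at much lower layers (anycast addressing, BGP route withdrawal, DNS failover), but the principle is the same.

```python
# An illustrative sketch only: the simplest form of client-side failover between
# two sites. Both endpoint names are hypothetical placeholders.
import socket

ENDPOINTS = [
    ("primary.sydney.example.net", 443),     # hypothetical primary data centre
    ("standby.melbourne.example.net", 443),  # hypothetical disaster recovery site
]

def first_healthy(endpoints, timeout: float = 2.0):
    """Return the first endpoint that accepts a TCP connection, or None."""
    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host, port
        except OSError:
            continue  # this site is dark; try the next one
    return None

if __name__ == "__main__":
    active = first_healthy(ENDPOINTS)
    print("Serving from:", active if active else "no site reachable")
```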
TPG Telecom is currently seeking regulatory approval to sell its fibre and fixed network assets to Vocus Group. The agreement, initially announced in late 2024, remains under review by the Australian Competition and Consumer Commission (ACCC) and the Foreign Investment Review Board (FIRB). Although Monday’s outage is unlikely to directly derail the deal, it has highlighted the critical nature of TPG’s infrastructure. Observers wonder whether Vocus would inherit some of these vulnerabilities or invest in new solutions to prevent repeated catastrophes.
In previous outages, Optus and Telstra both faced public outcry that led to additional industry scrutiny. TPG could face similar external pressure: the ACCC may consider reliability factors when evaluating the broader implications of the sale, although network resilience is typically seen as an operational detail.
By late Monday night, the majority of TPG customers had regained access; however, reports suggest pockets of users remained offline well past 10pm. TPG apologised via social media, promising to work “around the clock” until services were fully restored. In many ways, the company’s statements resembled those made by Optus and Telstra in the wake of their own outages.
As the dust settled, a few forum users and industry experts called on Australia’s telcos to treat this as a wake-up call. The three largest telecom providers have all experienced large-scale failures in the last 12 to 24 months; each time, customers question the type of redundancy implemented and the vague and delayed updates to end users. Those old enough to remember the days of dial-up might have found themselves longing for the reliability of a simpler (and slower) era.
For now, TPG subscribers seem to be somewhat back online, though many remain wary. After Monday’s “Meltdown at Glebe,” the spotlight will remain on TPG’s infrastructure as it navigates the final hurdles of its proposed network asset sale to Vocus. In an industry already reeling from prior Optus and Telstra fiascos, Australian consumers are left wondering: is this the new normal, or will the nation’s biggest telcos learn from one another’s failures to ensure Australians can rely on consistent, stable internet?
As of Tuesday morning, some customers were still reporting an outage.
Written by: Newy Staff