Mark Zuckerberg is said to have started work on Koolau Ranch, his sprawling 1,400-acre compound on the Hawaiian island of Kauai, as far back as 2014.
It is set to include a shelter, complete with its own energy and food supplies, though the carpenters and electricians working on the site were banned from talking about it by non-disclosure agreements, according to a report by Wired magazine.
A six-foot wall blocked the project from view of a nearby road.
Asked last year if he was creating a doomsday bunker, the Facebook founder gave a flat no. The underground space spanning some 5,000 square feet is, he explained, "just like a little shelter, it's like a basement".
That hasn't stopped the speculation - likewise about his decision to buy 11 properties in the Crescent Park neighbourhood of Palo Alto in California, apparently adding some 7,000 square feet of underground space beneath them.
Though his building permits refer to basements, according to the New York Times, some of his neighbours call it a bunker. Or a billionaire's bat cave.
Then there is the speculation around other tech leaders, some of whom appear to have been busy buying up chunks of land with underground spaces, ripe for conversion into multi-million pound luxury bunkers.
Reid Hoffman, the co-founder of LinkedIn, has talked about "apocalypse insurance". This is something about half of the super-wealthy have, he has previously claimed, with New Zealand a popular destination for homes.
So, could they really be preparing for war, the effects of climate change, or some other catastrophic event the rest of us have yet to know about?
In the last few years, the advancement of artificial intelligence (AI) has only added to that list of potential existential woes. Many are deeply worried about the sheer speed of its progress.
Ilya Sutskever, chief scientist and a co-founder of OpenAI, is reported to be one of them.
By mid-2023, the San Francisco-based firm had released ChatGPT - the chatbot now used by hundreds of millions of people across the world - and it was working fast on updates.
But by that summer, Mr Sutskever was becoming increasingly convinced that computer scientists were on the brink of developing artificial general intelligence (AGI) - the point at which machines match human intelligence - according to a book by journalist Karen Hao.
In a meeting, Mr Sutskever suggested to colleagues that they should dig an underground shelter for the company's top scientists before such a powerful technology was released on the world, Ms Hao reports.
"We're definitely going to build a bunker before we release AGI," he is widely reported to have said, though it's unclear who he meant by "we".
This sheds light on a stark reality: many leading computer scientists and tech leaders, some of whom are working hard to develop a hugely intelligent form of AI, also seem deeply afraid of what it could one day do.
So when exactly - if ever - will AGI arrive? And could it really prove transformational enough to make ordinary people afraid?
Tech leaders have claimed that AGI is imminent. OpenAI boss Sam Altman said in December 2024 that it will come "sooner than most people in the world think".
Sir Demis Hassabis, the co-founder of DeepMind, has predicted it will arrive in the next five to ten years, while Anthropic founder Dario Amodei wrote last year that his preferred term - "powerful AI" - could be with us as early as 2026.
Others are dubious. "They move the goalposts all the time," says Dame Wendy Hall, professor of computer science at Southampton University. "It depends who you talk to."
We are on the phone but I can almost hear the eye-roll.
"The scientific community says AI technology is amazing," she adds, "but it's nowhere near human intelligence."
Ultimately, the fears of a few billionaires raise the question of whether their actions are justified preparations for an inevitable disaster or simply a sign of a wider societal anxiety about the future.