What set this project apart from all the failed digital city attempts before it was an integrative approach that combined cutting-edge technology with standardized production and a sustainable business model.
Applying this system to real-life urban management was an important global milestone toward a new age in which the physical and virtual worlds converge into a single information system.
In a nutshell, 5D Digital City is a technology that features a real-time virtual environment comprising accurate, detailed 3D models of actual cities. Additionally, the model is data-driven: each 3D element is linked to a database containing all sorts of real-world information.
These data-driven virtual models, depending on the level of access, can be used by governments for urban management, to perform accurate studies and simulations; by private investors, to virtually test the impact and performance of future developments in real-life scenarios; and by the general public, to perform location-based searches for products, services, or general information (think Google Earth).
And 5D means: 3D + DATA + TIME
5D City is a platform of proprietary software applications that:
- Create accurate digital 5D models of entire cities, either automatically from existing GIS data or through assisted manual production.
- Populate the model with data by linking it to existing databases and/or facilitating future data integrations.
- Optimize the model and data and securely store them on local or remote servers.
- Provide an immersive experience by displaying models of entire cities in real time, dynamically streaming the relevant information from the server.
- Provide a user-friendly interface and tools to browse and manage the model and perform real-time simulations, which is the whole purpose of the project: to simplify working with information for the average user.
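The core "5D" idea described above (3D geometry keyed to real-world data records) can be sketched roughly as follows. This is a minimal illustrative model, not the actual Screampoint platform; all class, field, and key names here are assumptions invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Building3D:
    """A single 3D element of the city model (names are illustrative)."""
    model_id: str   # key shared with the external database
    mesh_path: str  # geometry file served by the streaming server
    lod: int        # level of detail used for real-time streaming

@dataclass
class CityDatabase:
    """Maps each 3D element's id to its real-world attributes."""
    records: dict = field(default_factory=dict)

    def link(self, element: Building3D, attributes: dict) -> None:
        # Attach real-world data (zoning, usage, ownership...) to a 3D element.
        self.records[element.model_id] = attributes

    def query(self, model_id: str) -> dict:
        # What a user would see when selecting a building in the viewer.
        return self.records.get(model_id, {})

db = CityDatabase()
tower = Building3D(model_id="WH-000123", mesh_path="meshes/WH-000123.obj", lod=2)
db.link(tower, {"district": "Wuchang", "floors": 32, "zoning": "commercial"})
print(db.query("WH-000123")["floors"])  # -> 32
```

The essential design point is the shared key: geometry lives on the streaming servers, attributes live in existing databases, and the `model_id` ties the two together so either side can be updated independently.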
To create useful 3D content for the system, it was critical to establish a new set of production standards and procedures with which all models, whether created automatically or manually, would have to comply. These standards defined in great detail how the models should be created, organized, and submitted to the system, which allowed for a number of advantages:
- tremendous scalability, because any third-party modeler could produce instantly usable content
- automated quality assurance and testing
- development of modeling tools to automate and facilitate parts of the process, or in some cases the entire process
- consistency ensured from the macro level (digital models of infrastructure) to the micro level (models of details on individual houses, or interior spaces)
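The automated quality assurance mentioned above follows naturally from having written standards: each submitted model can be validated mechanically before it enters the system. The sketch below is a hypothetical example of such a check; the rule names, naming prefix, and polygon budget are invented for illustration, not taken from the actual standards documents.

```python
def validate_model(model: dict) -> list:
    """Check a submitted model against (illustrative) standards rules.

    Returns a list of violation messages; an empty list means the
    model passes and can be ingested automatically.
    """
    errors = []
    if not model.get("name", "").startswith("5D_"):
        errors.append("naming: model name must use the 5D_ prefix")
    if model.get("polygon_count", 0) > 10_000:
        errors.append("budget: polygon count exceeds the per-building limit")
    if "database_key" not in model:
        errors.append("data: model is missing its database link key")
    return errors

ok = {"name": "5D_WH_000123", "polygon_count": 4200, "database_key": "WH-000123"}
bad = {"name": "tower_final_v2", "polygon_count": 25_000}
print(validate_model(ok))        # -> []
print(len(validate_model(bad)))  # -> 3
```

Because every rule is machine-checkable, the same validator can gate both in-house and third-party submissions, which is what makes the scalability claim above practical.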
Vision and Mission
Screampoint was founded and thrived on a common vision that the way we use technology to access information will change, and we were committed to making that change happen.
The critical problem we identified was that an increasingly enormous amount of information in virtual space (think of a typical Google search) wasn't properly connected to the objects and spaces of the physical world we live in, and was therefore not being used to its full potential.
We intended to bridge that gap by creating an accurate virtual representation of the physical world, which would be far more manageable and interconnected than the real one, and map all the relevant information about the real-world objects to their virtual counterparts.
Within this virtual world we would be able to fully harness the power of information to gain a better understanding of the world we live in; later, as hardware technology catches up, we would be able to easily transfer the already structured information from virtual objects to their real, physical counterparts.
One of the critical keys to the system's success was its financial viability. Similar attempts in the past typically failed because of high initial investment and maintenance costs.
We developed a sustainable and profitable financial model whereby the funding and ongoing maintenance of the system are supported not only by the government (as was typically the case) but, after only a few years, primarily by private developers.
My primary role in the project was to channel the company vision into a product: to help define what it should do, how it should be made, and how it should evolve in the future. I was involved in all stages of the process, from conceptual and organizational work to hands-on coding, including work on both software design and production standards.
On the software side, I helped outline the software architecture of the platform, then performed the business analysis to determine the individual functions the product should provide to its users. Based on these requirements, I hand-coded a system of prototypes that defined user interaction with the system in detail.
A large part of my assignment on the project was to create a system of Modeling Standards and Procedures to optimize the content production workflow. In this regard, I produced a range of documents that defined in great detail how to create, organize, and submit models to the system. Additionally, I created a range of modeling tools that facilitated parts of the process.
In a way, I was a bridge between the different parties involved: between software engineers and computer graphics artists, and between our team and the end users of the system.
Size of the 5D Wuhan real-time model:
- 8,459 square kilometers of terrain and infrastructure
- Roughly a million buildings represented in 3D
Hsiao-Lai Mei, President and CEO
Kim O’Brien Chalmers, VP Production
Nenad Katic, Director of Technology
Laraine Wang, General Manager, Greater China
Sinisa Spasojevic, Software Architect
Marc Maleh, Director
Romain Marruchi, Chief Engine Developer
Lingfei Song, Project Manager
Brad Avery, Associate Director
Note: I am no longer part of the development team, so this article does not contain up-to-date information about the latest system version; it refers to the Beta V1.0 version of the system as deployed and maintained until 2008.