Building a virtual world is no easy task, and the primary challenge is hardware. To make the sensations and feedback in a virtual reality environment as close to reality as possible, the amount of data required to construct the entire virtual world becomes enormous.
Leaving aside the question of how to obtain such a massive amount of data in the first place, merely finding a storage server capable of holding it all is already a difficult challenge for anyone with limited financial resources.
However, this is not a problem for Mark. Once his company began turning a profit with the release of its first product, he immediately upgraded the equipment throughout the firm.
The server cluster in Mark's research center is not a traditional server array. It is an entire cloud platform built on the all-purpose material graphene. Constructed from billions of high-performance graphene nano-nodes, the platform supports logical partitioning and invocation of servers within the data center as well as load balancing of computational tasks. The superconducting graphene greatly boosts the servers' computing speed and storage capacity; their current performance can fairly be called close to that of the quantum computers scientists envision.
Moreover, the scenes required for the virtual laboratory Mark wants to build are not especially complex. What it chiefly needs is real-world data on the experimental materials, called up during experiments to guarantee their accuracy. Even a server of modest performance can therefore meet Mark's current needs, which is why he decided to draw on just one server from the cluster.
With the data volume under control, hardware is no longer a concern. The remaining and final challenge, and the most difficult one, is the source of the data itself. No matter how skilled a coder may be at recreating the appearance and color of real objects in a virtual world, no amount of code alone can replicate their tactile feel, texture, weight, smell, and temperature. The source of this data thus becomes the hardest problem in building a virtual world.
Yet while rigid code and programs cannot faithfully reproduce the sensory experiences of the real world on their own, they can quantify the collected data and extrapolate a whole series of other sensory data from it.
For example, Mark can upload every memory he has accumulated from childhood to adulthood into a database through a brain-machine interface. A program can then analyze and classify those memories, quantifying the sensations and emotions associated with every object Mark has ever encountered.
The reason the data must be quantified is that even if Mark had handled every material in the world, the sensations each produces, weight for instance, change completely with factors such as volume. Once the data is quantified, however, when Mark lifts a 10-kilogram object in the virtual world, the system can retrieve the quantified data of Mark lifting a 1-kilogram object, multiply it by ten, and feed the result back to his brain, achieving accurate sensory perception. The same applies to every other sense.
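The linear-scaling scheme described here could be sketched in a few lines of Python. Everything below is illustrative: the baseline value, the units, and the `scale_sensation` function are invented for this example, since the story describes only the idea, not an implementation.

```python
# Illustrative sketch of the scaling idea: store one quantified baseline
# sensation and scale it linearly to the target object's weight.

# Hypothetical quantified datum: the sensation recorded when Mark
# lifted a 1 kg object (arbitrary units of perceived effort).
BASELINE_WEIGHT_KG = 1.0
BASELINE_SENSATION = 3.2

def scale_sensation(target_weight_kg: float) -> float:
    """Scale the 1 kg baseline sensation linearly to the target weight."""
    return BASELINE_SENSATION * (target_weight_kg / BASELINE_WEIGHT_KG)

# A 10 kg object in the virtual world reuses the 1 kg datum, times ten.
feedback = scale_sensation(10.0)
```

The same lookup-and-scale pattern would apply to any other sense for which a single quantified reference point exists.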
For example, the taste of rice wine is built from the flavors of rice, of alcohol, and of the distinctive aroma yeast gives off after fermentation. If the memories Mark uploads to the database contain no data on rice wine but do contain data on rice, alcohol, and yeast, the system need only combine the quantified flavor data of each in the right proportions to reproduce the taste of rice wine in the virtual world.
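The flavor-composition idea amounts to a weighted blend of component profiles. The sketch below is purely illustrative: the flavor notes, numeric values, and mixing proportions are made up, as the story gives no real data.

```python
# Illustrative sketch: synthesize a missing flavor (rice wine) by blending
# the quantified profiles of known components in given proportions.

# Hypothetical quantified flavor data pulled from the memory database.
flavor_db = {
    "rice":    {"sweet": 0.2, "grain": 0.8, "ferment": 0.0},
    "alcohol": {"sharp": 0.9, "ferment": 0.1},
    "yeast":   {"sweet": 0.1, "ferment": 0.7},
}

def combine_flavors(recipe: dict) -> dict:
    """Blend component flavor profiles weighted by the recipe proportions."""
    blended = {}
    for ingredient, proportion in recipe.items():
        for note, strength in flavor_db[ingredient].items():
            blended[note] = blended.get(note, 0.0) + proportion * strength
    return blended

# Rice wine as a weighted blend of its components (proportions invented).
rice_wine = combine_flavors({"rice": 0.6, "alcohol": 0.25, "yeast": 0.15})
```

Any flavor absent from the database could in principle be approximated this way, so long as its components and their rough proportions are known.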
And Mark's idea is entirely feasible. Although the brain distinguishes short-term from long-term memory, people can consciously recall only a small fraction of everything they have experienced in a lifetime.
However, the inability to recall a memory does not mean it has ceased to exist. As long as the data remains stored in the brain, those memories will never be lost unless the brain itself is damaged or impaired.
Our brain cells do not renew themselves the way other cells in the body do. From birth to death they are never replaced, only repaired, while the memories stored within them are continually shuffled among neurons. One could say the brain is like a hard drive: every memory ever formed leaves a trace, and the reason we cannot recall some of them is that their file locations have shifted, leaving us unable to access them.
Duplicating memories through a brain-machine interface, however, sidesteps this problem entirely, because the machine does not care about content; it knows only that whatever exists should be copied. Classification and organization can be left to other machines and programs.
And that is exactly what Mark must do now. He muses, "How did a project that seemed so simple keep growing more and more complex? Now I have to write a program just to quantify sensory data. Sigh! I've come this far; giving up now would waste everything I've done..."