A storm on the horizon

Recently the FCC made an important announcement regarding what is known as “net neutrality”. The principle behind net neutrality is simple: no entity on the internet should be treated more or less favorably by internet service providers when it comes to traffic flow. A small company pushing out 1 Mbit/s should have its traffic expedited and treated with the same care as a large company pushing out the same 1 Mbit/s.

To me this seems like common sense. As small business owners, we have enough going against us in comparison to the big guys. We have fewer people on our support staff, fewer people monitoring and keeping our servers running, and fewer staff fixing our software.

Now we have something else to worry about. The FCC seems to think it is a good idea to pursue a policy that is sure to create even more disadvantages for small businesses that don’t have the money to pay big telco and cable companies for a “fast route” for their traffic. If this becomes permanent FCC policy, and our traffic is relegated to slower peering points because we can’t afford the fees the big guys want simply for passing our traffic through links that are not congested, we’re in big trouble, and so is the rest of the fledgling 3d virtual world and VR industry.

VR and virtual worlds are especially sensitive to latency and packet loss. Our services deliver realtime, interactive 3d environments. We do our best to make sure we’re sending traffic efficiently, and we try to avoid sending packets that aren’t needed for state changes and movement. However, if ISPs begin randomly dropping our traffic just because we can’t afford to pay every ISP a special fee, our services will quickly degrade to a point where they’re no longer believable, where the world no longer feels right, and where immersion is disrupted by jerky, false-looking objects and interactions. Drop more of our traffic and we’ll be forced to waste even more money and time debugging problems that are being intentionally created at the network level.

Startups are already strapped for resources. How is this a good idea?

If this is the future of the internet, it will surely kill off many startups in the United States that require connections with reasonable latency and low packet loss. If Facebook’s purchase of Oculus is any indication, our breed of interactive services is the future of traffic on the internet. Messing with that traffic might just count all of us in the USA out of the great 3D and interactive tech boom that will most likely first be developed and tested by startup companies. Being an internet startup based in the US could actually become a hindrance for companies wanting to work on the bleeding edge. Is this really good for the country?

In closing, in my opinion the internet doesn’t need to become a place where only the guys with the most money are able to pass traffic around with reasonable guarantees. I have a hard time believing that these huge telcos with $6 billion+ in profits are hurting from having to update their infrastructure without collecting additional income from every company that wants to pass traffic through their networks. There has to be a happy medium here.

Virtual world thoughts

I just got home this afternoon from a show in Toronto. It was a leg of Armin Van Buuren’s “Intense” tour. The display of music combined with stunning visuals and effects really reminded me of the power of virtual worlds and all the wonder they can bring. Armin’s shows always sell to capacity, and just about everyone who attended stayed for the entire 4+ hours on their feet, dancing the night away and just being present with other people. The environment is absolutely energetic and uplifting. Virtual worlds, especially when combined with the coming generations of VR tech, can bring these types of experiences to more people than ever before. So I got to wondering: what does the future look like, and how can we promote a VR future that brings life-changing social experiences into the home?

Listening to Philip Rosedale’s VWBPE keynote, I heard a lot I agree with on the technology side of things. Latency sucks, and if you’re trying to create the illusion of actually being in the room with a group of people, it can’t take seconds for their avatars to respond to real life actions. Expanding on this even more, you also lose important bits of immersion any time people have to wait a long time for something or are forced to watch a loading screen. A study done by Akamai 8 years ago showed that 28% of shoppers would leave a retail web site that took longer than 4 seconds to load [1]. In 2009 a similar study was run on behalf of Akamai and found that 47% of visitors expected to wait no more than two seconds for a web page to load [2]. A 3d scene is far more complex than a web page, but if we’re going to try to attract mainstream attention, people aren’t going to be any more forgiving. A user should be able to enter and begin streaming the scene right away without having to wait 40 seconds or more for it to be processed. Latency and long delays must be mitigated.

Spatial relations are important. I believe a seamless, interconnected virtual universe is key to a truly immersive experience. I love the InWorldz mainland and ocean connections, as well as the various large masses of private regions grouped together. You can travel to a huge landmass by walking or flying, or via land, sea, and air vehicles, and only hit a wall after crossing straight through dozens of regions. This allows people to come together and form large communities of interconnected continents that can become part of a much larger story than any one person could possibly dream up. For believable immersion, I believe you must have a choice between being “linked” to your friend’s virtual land and being able to walk there because he set up close to you. I fell in love with virtual worlds because I could walk a huge continent and explore. This is so important to me that the future loading and sharding strategies I have dreamt up take care of it in their design. All I need is the time and resources to get them developed and showcased.

3d virtual worlds should not be billed as a replacement for people’s real lives. This is counterproductive to their uptake and casts a false light on what many of us really hope to accomplish, which is augmentation, not replacement. For example, I believe InShape will augment the exercise routines of its users. It is not intended to replace their routines, but rather to make them more fun. It is a shame that the name of one of the most popular virtual worlds seemed to imply life replacement rather than life enhancement. The name issue alone may have resulted in fewer people becoming interested in virtual worlds than otherwise would have been. Virtual worlds allow you to connect with your friends all over the world when distance or other circumstances in your real life would make that impossible.

Openness has been an important topic as well. I have been reading far too much that confuses the openness of the protocols behind internet and web technology with universal identity. In the OpenSim landscape, any grid that does not currently implement hypergrid is considered a “walled garden grid”, and negative parallels are drawn between this and AOL before the explosion of the world wide web. This is creating a lot of tension and tribalism in a space that right now simply needs to grow and add users before it can successfully support arguments of this philosophical magnitude without fragmenting into an unworkable mess.

The first problem with comparing hypergrid to the world wide web is that hyperlinks on the web do not necessarily get you access to the content of web sites. In fact, many of the most popular sites on the web, including facebook, twitter, and others, require you to sign up in order to access and interact with the vast amount of content they contain. They are essentially walled gardens connected to each other by hyperlinks; eventually, to get where you want to go, you will be asked to log in. This hasn’t prevented the web from achieving explosive growth, nor has it stifled innovation. People still have a clear choice of which sites they want to sign up for, and they don’t have to continuously log into their favorites because cookies are used to remember who they are for a while. They just click and go. The parallel to this would be not having to close the viewer to log into a new grid. You would just pick the change grid option, or type its login URI, and go.

The second problem with comparing the hypergrid to the standards of the web is that hypergrid is at its core a universal identity service akin to OpenID. It lets you sign up in one place and log into any others that support the protocol. The web as it stands today lacks a ubiquitous universal ID system, so there is really no comparison. Even if it had one, virtual worlds contain resources that do not map to anything comparable on the web. Carrying your inventory with you seamlessly between grids would be like automatically carrying your linkedin contacts with you when you log into facebook, or carrying your youtube videos with you to vimeo. The closest thing we have to a universal identity service, OAuth-based login, is dominated by the facebook “walled garden”.

Ebbe Altberg said something similar at VWBPE2014:

So that is where I sort-of lean in terms of priorities: make it much, much easier to use in advance of focusing on the metaverse or interconnection. Even the web today is not really interconnected; you don’t take your identity from one place to the other, and the Internet works quite well without that, and so it’s not a top priority for me. It’s not that I disagree with it, it’s not that I don’t like it, it’s just a matter of priority for me.

Finally, the comparison of a grid not implementing hypergrid to the old AOL is laughable, and I think at times designed to be intentionally inflammatory. The biggest killer of the AOL walled garden was that they charged a monthly fee for users to access their content. When the news, search, and other content that AOL offered started becoming easy to access from outside of the AOL client, AOL lost its grip on subscribers who now had access to free content and no longer needed the paid, organized content services that AOL offered. The vast majority of opensim grids are free to sign up for and explore for as long as you want, no payment necessary, so the “AOL” parallel is completely invalid. Having to log into separate grids is not much different from the way we log into successful websites today, except we don’t have to close the browser.

I think there are more people out there willing to make wild predictions from questionable parallels to past events than is healthy for the virtual reality community. Hypergrid is awesome connectivity tech, but using or not using it doesn’t say anything definitive about a grid or its potential. Concentrate on building the future and it will go where it is going to go all by itself. It’s a matter of priority. Let’s get the tech right!

Think before you act (or code)

When you have a good idea, there is a huge temptation to just jump right into it and get started. When the project you’re working on is tiny and guaranteed never to grow beyond that, jumping right in may even be the most effective way to get the task done. But when you have a large project to accomplish, jumping right in can be a total disaster.

Let’s say one day you decide you want to try to build a house. You get the house plan in your head, call up a bunch of friends, and run to your local home improvement store to get lumber, nails, shingles, windows, doors… You buy all the things that you think you’ll need to build the house and bring them to an empty plot of land with your friends to begin.

You start building. You’re not really sure exactly what layout you decided on for the home, so as you’re framing things in, you keep having to tear down, cut, and rebuild parts of walls that are too long or too short. No standard wall thickness was decided on, so there are variations between the walls your friends have built and your own, and when it comes time to nail everything into place, nothing fits together correctly. After the walls are finally put together in a hodgepodge and nothing is square, it turns out that the drain pipes for the toilet plumbing in the bathroom are now in the center of the bedroom, and it is impossible to continue building the house because the frame is too weak to support the roof.

Designing and building large scale software is similar to architecting and building a house. This was recognized by a few very smart people who noticed many useful recurring patterns in software and decided to document them in the book Design Patterns: Elements of Reusable Object-Oriented Software, also known as the “Gang of Four” book or GOF book. Not only are there reusable patterns in software development just as there are in building physical structures, but to create a successful end result with fewer bugs and architectural problems you must also do a lot of work before you even begin to code.
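
To make the idea of a recurring pattern concrete, here is a minimal sketch of one of the GOF patterns, Strategy, in C#. Every name in it (ICompressionStrategy, GZipCompression, AssetUploader, and so on) is hypothetical and invented purely for this illustration; the point is that the algorithm varies behind a stable interface, so the calling code never has to change when a new variant is added.

using System;
using System.IO;
using System.IO.Compression;

// Strategy (GOF): the compression algorithm is selected at runtime and
// hidden behind a common interface. All of these names are hypothetical.
interface ICompressionStrategy
{
    byte[] Compress(byte[] input);
}

class NoCompression : ICompressionStrategy
{
    public byte[] Compress(byte[] input) { return input; }
}

class GZipCompression : ICompressionStrategy
{
    public byte[] Compress(byte[] input)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(input, 0, input.Length);
            }
            return output.ToArray();
        }
    }
}

class AssetUploader
{
    private readonly ICompressionStrategy m_compression;

    public AssetUploader(ICompressionStrategy compression)
    {
        m_compression = compression;
    }

    public void Upload(byte[] asset)
    {
        // The uploader neither knows nor cares which strategy was chosen
        byte[] payload = m_compression.Compress(asset);
        Console.WriteLine("Uploading {0} bytes", payload.Length);
    }
}

Swapping new GZipCompression() for new NoCompression() when constructing the AssetUploader changes the behavior without touching the uploader’s code, which is exactly the kind of flexibility these documented patterns are meant to buy you.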

One way or another, you need to get the design of the software out of your head, written down, and tweaked in some medium more concrete than your mind alone. For each new large project I take on, I will usually write a document describing how I see the internals of the software working together and how the design goes about solving the problem I am tasked to solve. I’ll document data structures, database layouts, and core software component interactions.

After the initial documentation of the big picture, I’ll move on to UML structure diagrams to firm up the actual classes and behavior. I’ll take this time to really think about how the program should be laid out internally, make decisions about inheritance vs composition and the design patterns I may want to employ, and make changes that I think will lead to the cleanest, most understandable code when it comes time to actually start typing.

After the UML static structure, I may take some time to create UML sequence diagrams for the most complex object interactions to make sure that the design is sound and that I’m not missing anything subtle about how the calls and messages between objects will have to flow. During this time I’ll also keep tweaking the UML static structure diagram. The UML document should always stay fluid and be allowed to change throughout the development cycle.

So what is the point of all this? How is this an advantage over just figuring all of this stuff out in your head and making these determinations while you code?

It’s simple. Code is expensive and rigid. Once you have a lot of it down, it is hard to change. Changing a major system on a UML diagram involves deleting a few objects off the diagram, rearranging some others, and adding a few new ones. Doing the same thing in code involves a cascading series of changes that affect many of the classes in the program, and a lot of time sunk into repeating this multiple times throughout the initial design of the software. It is much easier to change things at a high level (thinking about and documenting a design) than at a low level (writing the software).

When you think ahead, by the time you get to the actual coding part of your work, you’ll already know exactly how the pieces should fit together and will have worked through most of the actual problems of making everything mesh. Coding will turn into simply describing to the computer in its own language how you need it to solve your problem. You’ll be explaining your solution to the compiler instead of trying to come up with the solution and explaining it at the same time. You’ll be happier and more efficient. You’ll get more sleep, and you might even get a bit more time with your family and friends.

Asynchrony and C#

I am currently rewriting connection handling and related code for InWorldz, and I’m changing most of it to be async because many of the operations that have to complete end up waiting for network I/O. I’ve written a few of our other services in C++ following async programming patterns using Boost.Asio, and although following program flow and making sure all error and success cases are handled properly can be a challenge, the performance benefits of an async flow coupled with operating-system-supported async primitives cannot be denied. Once you get used to it, async programming can be a real asset to have in your toolbox, and in some scenarios it can actually simplify your code.

I’ve always felt that the model of using what are essentially function callbacks for every async call was expensive in terms of the amount of code you end up having to write. This can get a bit brain-numbing, especially in C++, because you have to add the callback functions to both the header file of a class and the .cpp implementation file. With the advent of C++11 and lambda functions, the extra code could definitely be minimized, but there was still some elegance missing that I just couldn’t put my finger on. Then I found the await and async keywords introduced with C# 5.0 and .NET 4.5.

Let’s take a look at a simple (but contrived) example using Boost.Asio, and then another example using C#, .NET 4.5, and await/async.

Badly written C++ example

void WorkerThingy::doPart1(boost::function<void (bool)> resultCallback)
{
  // Send the first buffer; Asio will call doPart2() when the write completes
  boost::asio::async_write(client->getConn(),
    boost::asio::buffer(*data1, data1->size()),
    boost::bind(&WorkerThingy::doPart2, shared_from_this(),
      boost::asio::placeholders::error, resultCallback));
}

void WorkerThingy::doPart2(const boost::system::error_code& part1error,
  boost::function<void (bool)> resultCallback)
{
  // Send the second buffer; doPart3() is the completion handler
  boost::asio::async_write(client->getConn(),
    boost::asio::buffer(*data2, data2->size()),
    boost::bind(&WorkerThingy::doPart3, shared_from_this(),
      boost::asio::placeholders::error, resultCallback));
}

void WorkerThingy::doPart3(const boost::system::error_code& part2error,
  boost::function<void (bool)> resultCallback)
{
  // Send the final buffer; itsDone() fires when the last write completes
  boost::asio::async_write(client->getConn(),
    boost::asio::buffer(*data3, data3->size()),
    boost::bind(&WorkerThingy::itsDone, shared_from_this(),
      boost::asio::placeholders::error, resultCallback));
}

void WorkerThingy::itsDone(const boost::system::error_code& part3error,
  boost::function<void (bool)> resultCallback)
{
  // Report success or failure to whoever kicked off the chain
  resultCallback(!part3error);
}

bool WorkerThingy::doIt()
{
  // _1 forwards the bool success flag from itsDone() to onDoItResult()
  doPart1(boost::bind(&WorkerThingy::onDoItResult, this, _1));

  // Block the synchronous caller until the async chain signals completion
  boost::unique_lock<boost::mutex> lock(this->mut);
  while (!this->data_ready)
  {
    this->cond.wait(lock);
  }

  return this->success;
}

void WorkerThingy::onDoItResult(bool success)
{
  //...process the result we got
  boost::lock_guard<boost::mutex> lock(this->mut);
  this->data_ready = true;
  this->success = success;
  this->cond.notify_one();
}

In this really terrible example that probably still doesn’t compile, doIt() is called from a synchronous code path and has 3 async parts that it calls one by one to get its network tasks finished. doIt() calls doPart1(). When the send of the data from doPart1() has completed, doPart2() is called (with no error checking, but that’s another story). Then on the completion of doPart2(), doPart3() is called by Asio. Once the data from doPart3() is sent, Asio calls itsDone(), which calls the passed-in resultCallback, which in turn was bound to our onDoItResult() method. onDoItResult() signals the condition, which frees up the caller of doIt(), and doIt() finally returns whether the send in doPart3() succeeded.

The wait primitives and long call chain get confusing after a while, and in real work there are often many more calls chained together to complete a task.

Let’s see what this might look like in C# and .NET 4.5 using await and async.

Badly written, but more readable C# example

using System.Net.Sockets;
using System.Threading.Tasks;

namespace AwaitSample
{
    class WorkerThingy
    {
        private NetworkStream m_stream;

        private byte[] data1;
        private byte[] data2;
        private byte[] data3;

        private async Task<bool> DoItInternal()
        {
            // Each await suspends this method until the write finishes,
            // without tying up a thread while the I/O is in flight
            await m_stream.WriteAsync(data1, 0, data1.Length);
            await m_stream.WriteAsync(data2, 0, data2.Length);
            await m_stream.WriteAsync(data3, 0, data3.Length);

            return true;
        }

        public bool DoIt()
        {
            // Synchronous entry point: start the async work and block for the result
            var task = DoItInternal();
            task.Wait();
            return task.Result;
        }

    }
}

When the synchronous caller executes DoIt(), the function calls DoItInternal(). DoItInternal() starts the first WriteAsync() call and, at the first await that isn’t already complete, returns control to its caller while the write is still running. The caller, DoIt(), is at this point waiting for the task to complete by calling task.Wait(). After the WriteAsync(data1) call completes, the method moves on to WriteAsync(data2), and DoItInternal() is suspended until that call finishes. Finally, WriteAsync(data3) is executed, and when it completes, the task itself completes, which unblocks task.Wait() and gets us the result.

As you can see, this is super useful and really cleans up the call chain for async execution. We also don’t have to await each of the WriteAsync calls one at a time if we would rather have them execute in parallel; that is easily done without adding a bunch of functions and trying to coordinate everything.
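
As a rough sketch of what that might look like, the writes could be started together and then awaited as a group with Task.WhenAll. This is purely an illustration: the stream parameters here are hypothetical, separate connections added for the example, since issuing multiple concurrent writes against a single NetworkStream isn’t safe.

        // Hypothetical variant that pushes the three buffers over three
        // separate connections at the same time. Task.WhenAll produces a
        // task that completes once every write has finished.
        private async Task<bool> DoItParallel(NetworkStream stream1,
            NetworkStream stream2, NetworkStream stream3)
        {
            // Start all three writes without awaiting them individually...
            Task write1 = stream1.WriteAsync(data1, 0, data1.Length);
            Task write2 = stream2.WriteAsync(data2, 0, data2.Length);
            Task write3 = stream3.WriteAsync(data3, 0, data3.Length);

            // ...then wait for the whole group to finish
            await Task.WhenAll(write1, write2, write3);

            return true;
        }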