Oaxaca

On January 23rd this year, the CoLTE team, this time consisting of myself (Spencer Sevilla), Pat Kosakanchit, and Esther Jang, headed down to Oaxaca, Mexico, to collaborate with the amazing folks at Rhizomatica and Telecomunicaciones Indígenas Comunitarias (TIC). While we were there, we wrestled with equipment, erected a tower, and eventually launched a second pilot network. After some minor stabilization efforts and a sprint to submit a formal academic paper, I’m finally getting around to this write-up of our time down there.

Team and Context:

Led by Peter Bloom, Rhizomatica and TIC have been working in and around Oaxaca for the last eight years, where they provide 2G (voice and text) cellular service to the indigenous communities in the surrounding area. Both organizations are widely considered to be world leaders in the rural connectivity space, and it shows. Everything from their load-out checklists and choice of tools and vehicles to their innate knowledge of when they can wing it and when they can’t, demonstrates the honed expertise of people who know their craft inside and out.

There are many things I like about my job – but my favorite thing, by far, is that it provides me a unique opportunity to see some different corners of the world. Though two weeks does not an expert make, I found Oaxaca to be a particularly beautiful corner. The city of Oaxaca de Juarez is nestled in the highlands of Southern Mexico, at an elevation of approximately 5,000 feet (1,600 meters). Though we were there in late January, the weather was pleasantly hot and dry, with temperatures reaching the mid-eighties (Fahrenheit) and beautiful, fiery sunsets every evening. Oaxaca sprawls out to fill a valley (the Valle Central), and is bordered by two mountain ranges, the Sierra Madre and Sierra Azul. Oaxaca is known for an active youth scene, and the city certainly feels alive, with a vibrant art and food scene set against an anachronistic backdrop of colonial architecture, reclaimed and repurposed by a modern Indigenous rights movement into an empowered, proud, self-actualized culture.

eNode Blues:

One of the biggest takeaways from our work in Papua was that safely transporting eNodeBs is a tremendous pain in the ass! Given that we typically order commercial eNodeBs from international manufacturers anyway, we concluded that it makes more sense to have them shipped directly to the deployment location when possible. This time around, we configured our EPCs in the lab, hand-carried them on the flight down, and met up with our friend Peter, who had already received the eNodeBs and was holding them at the office. The initial plan was to install three separate sites and have a bake-off: one site with the exact same BaiCells gear we deployed in Papua, and the other two sites with some new gear from a small Finnish group called Kuha.

While this was a good idea in theory… in practice, it turned out to create some serious complications, and almost derailed the entire trip. The BaiCells eNodeB, which we expected to be the exact same model that we deployed in Papua, turned out to be a slightly newer generation of that model. This particular generation shipped with a couple of unfamiliar bugs, as well as a redesigned configuration interface that led an unnamed teammate* to accidentally lock the team out of the unit for several days. Much credit and many thanks go to BaiCells’ North American support team: not only did they help us get back into the unit, they even gave me a personal cell phone number to call once they understood that we were deploying their gear on a limited time schedule. From Papua to Oaxaca, I continue to have a very positive impression of the BaiCells team, and have nothing but good things to say about both their products and support.

*it was Spencer.

Unfortunately… our team had a less positive experience with the Kuha gear. We lost a lot of time in Oaxaca fighting through one configuration challenge after another, usually with somewhat unintuitive or unhelpful symptoms. Though the Kuha team was friendly and helpful over email, their lack of a North American support center meant that all support work was done from 9 to 5 Helsinki time, forcing me to stay up late and wake up early to try to get in one more round of emails before their team went home for the evening. We eventually got all the issues worked out and finally got their eNodeB to connect to our EPC… two days after we got back home. Deployment of these two radios has yet to be arranged 😦

Network Complications:

A major difference from the situation in Bokondini, one that I hadn’t really thought about but probably should have, was that of EPC access. In Bokondini, our backhaul to Wamena was provided by Airwaves Missions, who were thoughtful enough to set up a forwarded port for us, so that we could SSH into the EPC and access it from back home in Seattle. Given how often the EPC crashed, especially in the early days, this was a pretty vital requirement.

In Oaxaca, this was not the case at all: Internet backhaul to the sites is provided by a third-party ISP that is, apparently, hard to get ahold of and doesn’t respond to requests of this nature. This issue dawned on us two days before our deployment date, at which point it became a big and immediate problem – since it meant that after we deployed the EPC in the field, we’d literally have no way to access it, check on it, or fix any issues that might arise!

Conceptually, the solution to this problem is simple: set up a Virtual Private Network (VPN), host it on a server that’s reachable over the public Internet, and configure the EPC to connect to this VPN every time it boots up. As long as the EPC is connected to the VPN, any other computer (like my laptop) should be able to join the VPN and then access the EPC. As an added bonus, this solution is far more secure than the port-forwarding setup in Bokondini, and also more robust in terms of accessible services. Sounds great, right?

Wrong! Though theoretically simple, in practice this situation was all sorts of screwed up, and took a surprising amount of pain and ugly code to un-stick. As it turns out, most (all?) VPN clients are somewhat failure-prone, and seem to be built with the assumption that they’re running on a machine operated by a human user. In that context, VPN sessions are naturally short-lived (I log on, I work, I log off) and the consequences of failure are minimal (the VPN disconnects, I manually reconnect it). In our context, these assumptions are wildly off-base and incredibly problematic: we expect the EPC to be online pretty much indefinitely, and a VPN disconnection is essentially a fatal problem, given that there’s no way for us to tell the EPC to do anything if the VPN fails. To make matters worse, the VPN often failed silently, without any obvious hooks for us to programmatically restart it.

After quickly buying a license for OpenVPN and hosting it on EC2, the team spent a large chunk of time configuring and reconfiguring the OpenVPN client, begging it to stay online for more than seven hours at a time. Lacking any success in convincing commercial software to just do its job, we eventually wrote some cron scripts to (1) restart the VPN every four hours and (2) hard-reboot the EPC every 24 hours, just in case. This solution was remarkably effective, and got us to a workable state… just in time to deploy our code in the field.
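
For the technically curious, here’s a minimal sketch of the watchdog idea in Python (the real fix was a couple of cron entries, as described above); the interface name, service name, and check interval are assumptions rather than our exact configuration:

```python
#!/usr/bin/env python3
"""Minimal VPN watchdog sketch, meant to be run from cron every few minutes.

Assumption: the OpenVPN client runs as the systemd unit "openvpn@client" and
creates a "tun0" interface when connected. Neither name is guaranteed to
match a real deployment.
"""
import subprocess

VPN_IFACE = "tun0"              # hypothetical tunnel interface name
VPN_SERVICE = "openvpn@client"  # hypothetical systemd unit name


def vpn_is_up() -> bool:
    # "ip link show tun0" exits non-zero if the interface doesn't exist
    result = subprocess.run(["ip", "link", "show", VPN_IFACE], capture_output=True)
    return result.returncode == 0


if __name__ == "__main__":
    if not vpn_is_up():
        # The client failed silently; kick it from the outside.
        subprocess.run(["systemctl", "restart", VPN_SERVICE])
```

The 24-hour hard reboot was simply another cron entry layered on top of this, as a belt-and-suspenders measure.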

An Eventual Deployment:

[Photo: Driving with a tower strapped to the roof]

On Day 13 of 14, racing the clock down to the wire, we finally headed out and got a site up! Early in the morning, Esther and I, along with a TIC crew led by Javi, loaded up a truck, strapped a tower to the top, and headed out to a small mountain village approximately three hours northwest of Oaxaca. After arriving, we met the town leadership, chatted a bit about what we were building and what it would mean for the community, and then got down to it.

[Photo: Enjoying a laugh during assembly]

We spent the rest of the day working under a hot sun – hauling the gear (and the tower) up three flights of stairs, drilling anchor holes in the floor and walls, mounting the radio equipment on the tower, hoisting and securing everything with steel cable guylines, and running power and network cables through the building. After eight hours of this, we finally got everything set up and running just before sundown.

[Photo: Mounting the eNodeBs and hooking them up]

Our test phone connected to the network and got on the Internet, we high-fived all around, packed up the truck, grabbed some well-earned dinner cooked over an open flame, and headed back home, eventually falling into bed slightly after midnight. The next day, that was that: we woke up late, said our goodbyes, packed our bags, and hit the airport.

My Biggest Takeaways:

I’m incredibly proud of the effort our team put in to make things work, and our final success owes more to our persistence and drive than to any particular innate talent. Despite the repeated curveballs of software and network configuration, and an increasingly limited time schedule, our team relentlessly ground through issues until we found ourselves over the finish line. Both Pat and Esther shone under pressure, diving in and setting up VPNs and IPSec tunnels with minimal background knowledge or prior experience. Watching Pat learn how VPNs work over breakfast, and then have one set up and working by lunch, was a particular stand-out memory 🙂

As the person leading the deployment, well… they say good judgement comes from experience, and experience comes from bad judgement. While some of the curveballs were out of our control, I could have anticipated many of them had I thought through the deployment a bit more. A wide range of naive assumptions on my end could have been cleared up by asking more pointed and explicit questions of our partners, and this knowledge would have enabled me to front-load a large part of our fieldwork in Seattle and use our time in Oaxaca more effectively. On the VPN front, I’m starting some work to stress-test and compare VPNs back here in the lab, with the intent of having a truly bombproof solution built and ready to go for our next deployment. On the eNodeB front, I’ve realized that many hardware companies now use an agile/rolling release cycle, and we cannot (and should not) expect uniformity between versions. The best solution we’ve come up with for future deployments is that every time we order a shipment of eNodeBs for a site, we should have one of them sent to our team in Seattle, so that we can catch any surprises ahead of time. This means we’ll have to hand-carry a single eNodeB down, which shouldn’t be too much of a problem if the other ones are already on-site.

On the bright side, all’s well that ends well. The network is up and operational, and the TIC team has already started distributing SIM cards for an initial trial. Moving forward, I’m incredibly curious to see how CoLTE impacts the community (initial reports have been good!) and I’m very pleased to get our second site up and operational. Longer-term, it was fantastic to collaborate with the Oaxaca team, and I’m sincerely looking forward to building this relationship and teaming up on more installations in the future. As always, thanks for reading.

 

 

Pat Kosakanchit Won a CRA Honorable Mention!

One of our outstanding undergraduates, Pat Kosakanchit, earned an Honorable Mention from the Computing Research Association’s Outstanding Undergraduate Researcher Awards for her work on CoLTE. Well done, Pat!

https://news.cs.washington.edu/2019/03/08/allen-school-undergraduates-recognized-for-excellence-in-research/

https://cra.org/about/awards/outstanding-undergraduate-researcher-award/#2019

What Can You Do With 1 Mbps?

The Internet connection in Bokondini is a 1 Mbps satellite connection with a 10:1 contention ratio, which means that actual speeds can get as low as 100 Kbps. We haven’t been to Bok since we turned on the network, but when we were there, we had pretty much unlimited access for the three of us, with a request to use it lightly during school-hours. Even though the connection was a wee bit slower than our 100 Mbps line back home, I grew to really, really appreciate it, and am half inclined to think it’s the perfect amount of Internet. When I talk about our work in Bokondini I often get the question of “what can you do with 1 Mbps today anyways?” so I figured I’d write a post explicitly talking about some of our network experiences.

Older Protocols Work Better

First and foremost, older Internet protocols were inherently designed for slower speeds and less-reliable network conditions – so it’s no surprise they tend to work better. Somewhat interestingly, this split maps almost exactly to whether the protocol is designed to run in the browser or exists as a standalone application. Load up OS X’s “Mail” app, which uses SMTP? No problem at all. Try to load Gmail in my browser? Not a chance!

On this note, a surprising culprit here was Slack. I naively assumed that the Slack app on my computer used XMPP, or one of those classic standardized chat protocols – but I was dead wrong. Turns out, Slack is Web-based, not that resilient, and surprisingly network-intensive for what is just a chat application. After realizing that I was literally unable to connect to Slack, I switched all my correspondence over to email, which went much, much smoother.

Web Interactivity Is Non-Existent

Our position behind a satellite link means that every round-trip takes a LOT longer than normal. This is a major killer for the trend in modern (web 2.0? 3.0?) web applications to (1) quickly load a shell of a page and (2) dynamically request the rest of the page as you interact with it. The biggest offenders of this practice that I noticed were articles that only load the next paragraph when you scroll down, rather than just fetching the whole page (all 20 KB of text, really?) and letting your browser handle the rendering.

Text-Only Webpages Are Awesome!

On the opposite end of the spectrum, we discovered that certain news organizations provide stripped-down, text-only versions of their sites! If you’re curious, check out https://text.npr.org and https://lite.cnn.com for some great examples. It made me immensely happy to see content providers showing awareness of constrained-bandwidth scenarios, and it’s also worth mentioning that (to my understanding) these types of websites are much more friendly and compatible with accessibility tools (such as text-to-speech readers). We had to kinda “guess around” and use Google to find some of these sites, but I’d love to see this practice become more widespread and/or standardized. It brings to mind how mobile-specific websites started with the “m.” subdomain (still used by Wikipedia and pretty much no one else?) and then evolved into the responsive designs in use today… it would be fascinating to see backhaul-constrained access go the same way, maybe loading bare-bones content first and keying the responsivity off of network performance metrics.

CDNs Matter A Lot

Almost all satellite networks today are “bent-pipes,” meaning that the satellite doesn’t do any routing/switching and simply re-transmits our signal down to a predetermined ground station. In our case, that station was located in Australia, which means that websites with a CDN presence in Australia performed much, much better than US-based websites. Special shout-out to YouTube: I seriously didn’t expect video-streaming to work at all, but our YouTube tests came out remarkably usable, with relatively minimal buffering.

Interruptions are Interrupted

My favorite thing, by far, about working in Bokondini was that it felt like the Internet gave me everything I needed, but nothing I didn’t. The majority of my online work-tools (i.e. email and git) worked okay, if a little bit slower than usual, and my local work tools (text and code editors) were obviously fine. Even Google Docs worked great – I’d set it to offline mode, type my heart out, and then just get online to sync the document every so often.

Most general-use Web pages loaded fast enough to not be obnoxious, but slow enough to require my intent and sustained interest (relevant XKCD). Say I was writing an article and needed to look up a statistic? Go to Google, search for it, wait a little bit, open the website, wait a little bit, not a problem at all. Unconsciously open a tab and surf over to Facebook to kill time? It loaded slowly enough to give me some time to question and reflect on what I was doing (really just wasting time), and I’d often end up closing the tab before it loaded. Turns out, the Internet’s a whole lot less of a distraction-machine when you make the instant gratification even slightly less instant!

1 Mbps For All?

Overall, I was remarkably impressed by how useful 1 Mbps Internet was, even in the modern Web ecosystem. Lately, I’ve been considering conducting another 1 Mbps experiment here in Seattle, to see if my user behavior comes out the same, or if the experience is changed by being geographically closer to common websites (or just being located in a highly connected city). Especially if it ends up helping me ignore my typical time-wasting sites, maybe it’ll turn out that more is less, less is more, and I end up preferring 1 Mbps to my 100!

Taking this idea even further, my colleague and friend Steve Song wrote a great post a while ago about the hypothetical economics of just providing free 2G cellular services (voice and SMS) to everyone. It’s pretty hard to top the social utility provided by voice and text services, but as more and more voice traffic turns into VoIP, I’ve started thinking and wondering about the Internet equivalent. Maybe ”1 Mbps for all” is really what we want to shoot for…

Bokondini Initial Roll-Out

All right! Where I left off last time, we had just built the network, turned it off, and headed back to the States (just in time for another round of pitches and conferences). Once back in Seattle, we expected a relatively quick roll-out, but ended up having to pump the brakes while some regulatory questions got sorted out (and David returned to Papua from the States).

To my understanding, the terms of our license require that we not compete with any existing commercial telecom operator. In Bokondini, there exists a single operator, Telkomsel, who offers 2G coverage only. While the law clearly stated that we couldn’t compete with Telkomsel, “compete” was less clearly defined. Are we another cellular network? Yes. Do we offer 2G coverage? No. Do we offer telecommunications services (i.e. voice or text)? No. Could people use our network for telecommunications services? Via WhatsApp, absolutely. This set of questions ended up being our initial foray into the much larger existential question of “are we a telco or an ISP?” Or, more succinctly, “what the hell type of service do we provide?”

Initial Network Roll-Out:

On October 18, David sent us a WhatsApp message telling us that he had turned the network on in Bokondini and that everything looked good! Turns out, the system was already working as intended: he had traveled to Bokondini, powered everything on, inserted a SIM card, used the network to video-chat with his father in Florida for an hour, and only then thought to send us a message. Two days later, we received word that he had distributed 10 SIM cards into the network and sold the first of many data packages to Fadly for resale. Without even realizing it, we were now a live business generating revenue!

Bugs, Bugs, and More Bugs:

I wish I could tell you the story ended there, but I’d be lying. As soon as we added our first ten users, the system started crashing, sometimes as often as every thirty seconds! This was, as you might expect, not the intended system behavior. After a week of frantic work, I was able to pin the issue down to a specific user’s phone, and stabilized the network by sending word to Bokondini kindly asking this user to please keep their phone off until further notice. Another two weeks of analyzing log files eventually revealed the culprit: a single incorrectly parsed header field, deep in our code, that was crashing our system every time the phone tried to join the network. A one-line fix and we were back in business… until we added another ten users and everything started crashing again! We stayed in this holding pattern for approximately two to three months: add ten users, brace for more issues, frantically fix them, take a day off, add ten more users. A grueling process, to be sure, but a necessary step on our march towards a stable LTE platform.

Interestingly, the majority of the bugs we flushed out had to do with diversity in handsets. Phone manufacturers vary wildly in how (or if) they support certain fields and options, and different regions tend to see different manufacturers (for example, our customers predominantly use Oppo and Xiaomi phones, neither of which market their products in the United States). To make things even more complex, LTE provides a large number of optional header fields and many different auth/ID workflows, which ends up creating a very wide range of corner-cases for our EPC to support.

End-of-the-Year Recap and 2019 Goals:

Though I didn’t know if we’d make it this far (or if I’d ever stop coding)… I’m proud to report that by the time December hit, we finally had a stable, useful, functional LTE network running in Bokondini. I very seldom get bug reports now – and when I do, they’re usually caused by something simple/stupid on my part. Most recently, I accidentally filled up the whole disk by enabling some basic logging tools to see how much traffic was being sent locally.

Moving forward into 2019, we’ve got a wide range of things we want to do, features we want to build, and research questions that we can finally start asking and answering. Now that the network stays running, our top engineering goals are code-cleaning, a couple feature-adds, and streamlining the deployment and configuration process, ideally to the point where a non-expert can download the project and get started without having to coordinate with us.

Research questions and curiosities range from the technical (how congested is the satellite link, and how many users can we support?) to the economic (is this network profitable, and how is it affecting the development of this community?) to the social (what are our users doing on the network?) to the personal (how much more fieldwork can I put into this project before my girlfriend dumps me?).

Obviously, these questions heavily intersect and interplay with each other. If we better understand what our users are doing on the network (social), we can build specific tools to support those actions (technical), which will undoubtedly impact the overall usefulness of the network (economic). We’re hoping to really start digging into some of this research soon, especially with regards to using locally-hosted services as a way to relieve network congestion. As always, thanks for reading, and stay tuned for more!

Money Matters (Part 3)

This post is the third in a three-part series about the financial side of things. In Part 1, I talked about the systems we built for us to monitor network usage and for our users to buy and sell credit. In Part 2, we covered our operating expenses, and discussed some of the different considerations and models of how to bill users. In this part, I’m going to wrap things up by sharing some of the tricks and approaches that we can use to improve network performance and service for our users, and then providing some alternate funding models.

Local Services

In our network, a huge difference exists (in terms of network performance and congestion as well as cost) between traffic that stays local and traffic that goes out over the satellite link to connect to the Internet. Though our backhaul connection from our tower to the Internet is only 1 Mbps, the fronthaul connection from phones to the tower is much faster – we’ve seen speeds of 75 Mbps for an individual user, and LTE offers theoretical speeds of up to 150 Mbps. Local communication, like sending a file from one person to another, doesn’t actually use the satellite link at all, and works even if we get cut off from the Internet. This brings up the question of “should we offer a local-only option?”

If we do decide to provide the option of local-only service to users, we also have to figure out what it should cost. There’s one argument that says we should charge them only for power, because that’s really the only cost of the service they’re using. On the other hand, there are very good reasons for us to charge them more, and put that money towards the cost of the backhaul connection. Not only does everyone in the community benefit from Internet connectivity (non-paying users often ask paying users to perform services for them), but we could potentially offer “lower priority” Internet connectivity when available. Remember – because the backhaul cost is fixed per month, every second we’re not using the Internet is a second that we paid for and simply didn’t use. Moreover, putting more money towards the backhaul could result in cheaper Internet access per user, which in turn incentivizes more users to join the network and creates one of those positive cycles I talked about in Part 2.

At the same time, we’re also actively working to improve what services are “local” by hosting them within the community on our server. If Gmail struggles behind a slow satellite connection, that’s a great reason for us to host a local email server. The exact same thing could be said for higher-bandwidth services, like streaming video with our own media server, or caching web content. I’ll save this discussion for another post, but we packaged CoLTE with many such services, including OpenStreetMaps and a chat server. Going further, improving Internet/Web performance at the network edge is one of my primary research interests, and I hope to pursue this topic at a painful level of detail before too long.

Limited Services

In addition to a local services option, we’ve also considered a “limited services” package that provides Internet access only to a select set of websites/services at a reduced (or free) cost. This package would likely be focused around calling and texting services (e.g. WhatsApp and Skype), but could also include other select websites such as Wikipedia.

The “limited services” idea stems primarily from the observation that many of the most socially useful services are also very low-bandwidth. For example, a standard-quality VoIP call requires only 20 kbps of bandwidth, potentially less under certain conditions, and WhatsApp text messages use barely any bandwidth at all (though these conclusions are intuitive and based largely on observed experience, we’re hoping to formally quantify this traffic soon!). These characteristics stand in stark contrast to other Internet services that provide a relatively low social benefit at a much larger cost of bandwidth, such as video streaming or online gaming. This dichotomy naturally led us to discussions along the lines of “we’re not making any money off of WhatsApp anyways, so why not just give it away for free?” Going even further, I’m almost certain that we could strongly prioritize VoIP traffic such as Skype and WhatsApp, and dramatically improve the social utility of our network, without our general Internet users even noticing the difference.
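
To put rough numbers on that claim (using the approximate figures above, not measurements):

```python
# How many concurrent VoIP calls fit in the backhaul? Rough numbers only.
backhaul_kbps = 1000      # ~1 Mbps satellite link
voip_call_kbps = 20       # approximate standard-quality VoIP call

print(backhaul_kbps // voip_call_kbps)   # -> 50 calls, ignoring overhead and contention
```

Even with generous allowances for overhead, that’s dozens of simultaneous calls on a link that struggles to load a single modern webpage.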

Thanks For Reading!

A fascinating theme that’s emerged in this series of posts is the intersection of technical engineering, socio-economic engineering, and what’s best for the individual community. One of the major takeaways of this entire series should be that the options are endless – especially since when the community is its own ISP, their interests are perfectly aligned. As we deploy additional networks in different areas, we’re hoping to build and research all of these topics, with the end goal being something as close to a set of “best practices” as possible. As always, thanks for reading, and please comment or reach out if you’ve got any thoughts or questions.

Our Time In Bokondini

First and foremost: a sincere apology to everyone following along! After we left for Papua, everything picked up a lot of steam and was a bit uncertain, and the pace didn’t let up until the end of the year. It’s really a shame I didn’t live-blog the process, but in the next posts I’ll try to re-cap the back half of the year. In 2019, I’m hoping to do better, especially since so many people have reached out to me about the blog asking to know how it went!

Getting To Bokondini

If I had to describe Papua in a single word, it would have to be “remote.” It took our team four separate flights over 3-4 days to arrive in Wamena, the major city with an airport in the Baliem Valley region. From there, we met up with Ibel and Helga, who helped us buy our necessary foodstuffs (food for three people for three weeks is no joke), navigate a somewhat convoluted visa process, and figure out the way to Bokondini – a three-hour truck ride through muddy off-road terrain, including a river crossing. Making all of these connections while lugging around approximately 150 pounds of telecom equipment was a comical process, to say the least, but we (and the gear) made it safe and sound to Bokondini without any incidents to speak of.

In Bokondini, we stayed in a beautiful house owned by Scotty and Heidi Wisley. The Wisleys are a missionary couple who’ve been working in Bokondini for the past ten years, building out community leadership, infrastructure, and a local school. As it turns out, Scotty and Heidi were in Malaysia at a conference when we were in town, and they were gracious enough to lend us the use of their home during their absence.

Context Dictates Constraints

Even having spent a lot of time in various off-grid locations, I was particularly impressed by the infrastructural situation in Bokondini: a wild, anachronistic, perfect mix of old and new technologies, high-tech solutions paired with rural ingenuity and know-how.

Power:
Power to the village is provided by a combination of a solar array mounted on the schoolhouse roof and a small hydroelectric generator hooked up to a nearby stream; both inputs are wired to a bank of car batteries. From the batteries, 12VDC is converted to 220VAC via an inverter and distributed throughout the village via wires hung from house to house, in what can best be described as a nano-grid.

[Photo: Bokondini’s electrical system]

For purposes of power cleaning (as well as ensuring sufficient power for essential functions), the power is turned off daily from approximately 9pm to 6am. Given that the solar panels are already paid for and owned entirely by the community, the system works without any meters or billing – though during times with lower power generation, the news spreads through the community, and members are informally asked to conserve power (i.e. be sparing with their use of stoves and hot water).

 

Water:
There seemed to be no water infrastructure at all, micro or otherwise. Scotty and Heidi’s house is completely self-sufficient, and the taps draw unfiltered water from two main sources: a nearby river, or a rainwater catchment system. We then either boiled the water or ran it through a large, ten-liter Katadyn filter for drinking.

[Photo: “Just hand them up, it’ll be fine”]

Internet:
When we arrived, Bokondini already had very limited Internet access, via a 1 Mbps satellite link. This link came into the main “technical room” above the schoolhouse, where it was turned into a WiFi hotspot restricted to the teachers in the community, for educational use only. Informally, we were told that one of the main drivers of network traffic was teachers using YouTube as an educational resource – more on this below.

[Photo: Raising the tower]

Mounting The Tower

Once we were settled in, the work began in earnest. In a nutshell, the main effort was to lower a pole-tower, remove the (unused) dish on the top, replace it with our LTE gear and antennas, and secure the pole back the way it was. Once mounted, we would connect our EPC to the gear on the tower, power the whole thing on, and (hopefully) have a small-scale LTE network that would cover the entire community.

Mounting the gear on the tower was no small feat. Thankfully, with help and guidance from Ibel, who knows the infrastructure in Bokondini like the back of his hand, we got the job done, and done well. Twice a fast-moving thunderstorm chased us off the corrugated roof, and even after the work was done, we had to pull the tower down to replace a malfunctioning eNodeB… but ten days after we arrived, we finally raised the tower with our gear for the last time.

Meanwhile, when we weren’t climbing up on rooftops, we were hard at work configuring the EPC to stay online across a wide range of unorthodox (yet not uncommon) situations. Despite having verified our EPC’s functionality in a controlled lab setting, keeping it alive out in Bokondini required a substantial amount of in-the-field programming. The considerations ranged from the obvious (make sure the box turns itself back on after a power failure) to the subtle (ensure that if one component crashes, the right set of other components reboots so that the network recovers into a functional and useful state). All in all, the process made for a fascinating crash-course in Keeping Systems Alive, though I’d be lying if I said I enjoyed the course while it was happening.
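
To give a flavor of what that dependency-aware recovery looks like, here’s a hedged sketch; the service names below are placeholders rather than the actual CoLTE components or unit files:

```python
#!/usr/bin/env python3
"""Sketch of dependency-aware recovery: if a component dies, restart it plus
everything that depends on it, in order. The service names are placeholders,
not the real CoLTE daemons or systemd units."""
import subprocess

# component -> the list of services to restart (in order) if it goes down
RESTART_CHAIN = {
    "example-hss": ["example-hss", "example-mme"],
    "example-spgw": ["example-spgw", "example-mme"],
    "example-mme": ["example-mme"],
}


def is_running(service: str) -> bool:
    return subprocess.run(["systemctl", "is-active", "--quiet", service]).returncode == 0


def recover() -> None:
    for service, restart_list in RESTART_CHAIN.items():
        if not is_running(service):
            for dep in restart_list:
                subprocess.run(["systemctl", "restart", dep])


if __name__ == "__main__":
    recover()   # run periodically (e.g. from cron), alongside a boot-time "start everything"
```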

[Photo: A job well done!]

It’s well known that the best way to flush out bugs is to dogfood a product – so during this time, we dogfooded the network as hard as we could. Connect a phone, walk around town, run a hotspot back at the house all day – all the while, obsessively looking for any network disconnects or stumbles. One of the most noteworthy bugs seemed to be triggered by a specific phone in the community that tried to connect to our network (and subsequently crashed it) once a day, around 3pm. While the fix was anticlimactic (a single incorrectly-parsed header field), the workflow was hilarious: dig through the logs, make a guess, try to write a fix… and then wait for 3pm the next day to see if it worked.

Finally, on our second-to-last day, we had a stable network: no crashes, no bugs, no stumbles – just reliable Internet, from lights-on to lights-off. At this point, we hadn’t yet distributed any SIM cards, for a number of reasons both regulatory and social. We turned off the network to save power, left some instructions for turning it back on (just plug it in), and hit the road for another three days’ worth of flights back to Seattle – cautiously optimistic about the impending network launch, but without much of a clue what to expect.

Okay, that’s a wrap for now. In Part 2 I’ll cover the initial network launch, the process of bringing additional phones/users online, and some of our initial results of how the network’s doing. Spoiler alert: It’s going really, really well! 🙂

Off To Indonesia

Hi everyone,

Today, the rubber hits the road! My partner, Audrey, and I spent all Saturday running last-minute errands, sending last-minute email questions (“should I bring a camping stove or not?”), and packing all sorts of equipment into two Rubbermaid totes and our personal bags. Packing has been a comically awkward fusion of my outdoor life with my professional life: everything from tarps, mosquito nets, and hiking boots to literally 110 pounds of sensitive electronic equipment. We’ve got eNodeBs, EPCs, one thousand SIM cards, mounting hardware, antenna cables, ethernet cables and routers, extras of everything, a power strip, and all sorts of other tools, all *just barely* under the airline weight requirements and padded as best we could with styrofoam. At 11pm we lugged it all down to the airport, and at 2am we were off to Jakarta. If you check it out on a map, you’ll notice that Jakarta’s actually way out of the way from Bokondini (as in, 7 extra hours of flying), but Jakarta’s where international flights have to land in order to clear customs and enter Indonesia.

We met our teammate Matt (who flew in from fieldwork in the Philippines) at the airport, and after a day or two, the three of us will head into the field, with a small regional flight taking us from Jakarta to Jayapura, an even smaller flight from Jayapura to Wamena, and then finally a truck ride out to Bokondini. Once in Bokondini, our plan is to stay with a couple of Christian missionaries who have been working in the local community for the past twenty years or so. Scott and Heidi have been instrumental in helping us figure out what the on-the-ground logistics will be, what we’ll need to bring and do, not to mention opening their homes to us! We’re already incredibly grateful for their support, and we haven’t even arrived yet 🙂

Okay – well that wraps it up for this post, stay tuned for more content and pictures as we move through Indonesia and get further out into the jungle!

Money Matters (Part 2)

This post is Part 2 in a series about money and billing. Part 1 talked about the technical system we built, and how users purchase data and transfer credit. In this post, I’m going to talk about the basic economics behind our network, including what our expenses look like, what kind of revenue we need to generate, and some different billing models and their tradeoffs.

What Are Our Operating Expenses?

Our network’s recurring costs are relatively straightforward, and consist of two main components: power and Internet backhaul – both of which are surprisingly flat in our case. With respect to power, this is somewhat obvious: the cost to keep the tower running 24/7 is fixed, and depends only on the wattage of the cell tower and the price per kWh in the community. When it comes to backhaul, I didn’t quite know what to expect, but we ended up finding a 1 Mbps (yep, you read that number right) satellite Internet connection for 300 USD/month (yep, you read that number right, too).
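
Just to make the “fixed” part of the power cost concrete, here’s a toy calculation; both numbers below are made up for illustration, not measurements from our site:

```python
# Purely illustrative: neither the wattage nor the electricity price is a real
# number from our deployment.
tower_watts = 150        # hypothetical draw of eNodeB + EPC + router
price_per_kwh = 0.20     # hypothetical local price, in USD

kwh_per_month = tower_watts / 1000 * 24 * 30
print(f"{kwh_per_month:.0f} kWh/month -> ${kwh_per_month * price_per_kwh:.2f}/month")
# -> 108 kWh/month -> $21.60/month
```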

Billing Models

As we covered in Part 1, it’s super easy to figure out how many bytes a user has sent or received, and it’s easy for users to buy and sell packages… but that’s where the easy part stops. This brings us to the real question of “how do we want to charge our users?” Note that I said “how” and not “how much,” because even before we talk about rates, there are a bunch of different ways for us to do this. The two main ways we could do billing are either by the amount of data used, as I described previously, or via a flat monthly membership model. However, we could also give users discounts on specific services, provide cheaper rates for larger packages, or bill local traffic differently. There are a lot of different options here, and they’re super important, because billing is our main (only) tool to incentivize and influence user behavior.

Incentivizing Network Behavior

If you’ve ever split the cost of the Internet between roommates, or shared a cell phone family plan, then you’ve faced the exact problem we’re up against. Should we just split the cost evenly, or do we make the heavier users pay more? What if someone wants faster speed than the rest of us – do we make them pay for it on their own, or do we split the additional cost because we all benefit? What if someone can’t afford the full price – do we kick them off the network entirely, or take the money that they can afford? Should we offer discount rates for using the Internet during slow times? What’s the actual cost of adding a user to the network, anyway?

The fact that our Internet cost per month is fixed makes these questions much trickier than they might otherwise be. If our backhaul were charged per MB (as a lot of satellite connections are), we’d simply set our rates accordingly, pass the cost onto our users, and be done with it. However, since we’re paying the same monthly rate whether we use the link or not, every second the network isn’t being fully used can be considered a second of service lost. Under this line of thought, if we’re paying for it anyways, let’s use the network as much as possible – and this makes the case for subscription-based pricing that encourages full network utilization.

Unfortunately, the flip side of full utilization is congestion: if too many people want to use the network at any given moment, network performance suffers and slows down for everyone. Inevitably, the best way to reduce usage is to (1) raise rates per MB and (2) bill based on data usage, rather than a flat subscription – even though both of these approaches contradict what I just wrote in the previous paragraph.

The Minimum Viable Network

First, some basic math: Assuming that we split the cost evenly between all users, a network size of ten users would mean each user pays 300 / 10 = 30 USD/month for the satellite connection. If three users decide that they can’t afford it anymore, the cost would rise to 300 / 7 = 42 USD/month, potentially driving other users off the network and raising prices further in what the insurance markets call a “death spiral.” Conversely, the opposite could also occur: if three new users decided that they could afford Internet (at an advertised price of 30 USD/month) the price would drop to 300 / 13 = 23 USD/month and potentially attract even more signups.

This idea can be reduced to what we call the “Minimum Viable Network,” defined as the minimum number of users we need to avoid that death spiral. As long as we’re above the MVN, life is good, and when more people sign up we can choose to either give everyone a discount or build up our cash reserves – but once we drop below the MVN, we’ll start to see more drop-offs and a steep price increase that feeds back on itself. This, more than anything else, is the big problem of a fixed-cost connection: in a pay-per-MB world, we’d see our costs naturally decrease as users leave the network, but as it stands we’re on the hook for 300 USD/month no matter how many customers we have.
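
Here’s a quick sketch of that dynamic in code. The 300 USD/month figure is our real backhaul cost; the 35 USD/month cutoff (the most a user is willing to pay) is invented purely for illustration:

```python
# The $300/month backhaul cost is real; the $35/month cutoff is invented.
BACKHAUL_COST = 300      # USD/month, fixed regardless of usage
MAX_PRICE = 35           # hypothetical most a user will pay per month


def simulate(users: int) -> None:
    while users > 0:
        price = BACKHAUL_COST / users
        print(f"{users:2d} users -> ${price:.2f}/user/month")
        if price <= MAX_PRICE:
            break            # at or above the Minimum Viable Network: stable
        users -= 1           # someone drops off, and the price rises for everyone else


simulate(10)   # 10 users -> $30.00: stable on the first iteration
simulate(8)    # 8 users -> $37.50: below the MVN (9, with these numbers), so it spirals to zero
```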

Alternate Funding Models

So far, I’ve mentioned the pay-per-MB model and the monthly subscription model as our two major ways of creating revenue – but there are countless other approaches to keeping the lights on. More and more national-level governments are funding/subsidising programs aimed at connecting rural populations, such as the Universal Service Fund in the United States. Simultaneously, local governments are using taxes to build out telecom infrastructure, simply because it’s the easiest and best way to fund an infrastructural project that comes with a large range of social benefits, including but not limited to education, medicine, commerce, and financial inclusion.

Wrapping Things Up

By now, hopefully I’ve made it clear that this is a challenging space that we’re working in, and provided a useful overview of the tradeoffs. There are no easy answers, and this is probably going to be a topic we explore and experiment with in-depth over our initial deployment. Stay tuned for Part 3, where I’m going to cover some other techniques and approaches that we can build to reduce congestion and cost by keeping things local when possible.

Update: Part 3 is now available.

Money Matters (Part 1)

Okay, okay, let’s talk about money… everyone’s favorite topic, right? No, but seriously: In all my efforts to get CoLTE up and talking to phones, I didn’t really think about the (glaringly obvious) fact that for us to offer service to the community, we’ll have to pay for it somehow. That means building a billing platform of some sort – a task which has turned out to be both really easy and really hard.

There’s a lot of stuff to cover on this topic, so I’m breaking it up into a series of posts. This post will introduce the basic technical concepts of what we built, how we keep track of network usage, and how customers interact with our system.

Keeping Track of the Bits

One of the really neat things about LTE, as compared to other cell networks, is that in LTE, everything is just IP packets. This actually lets us dramatically simplify the technical part of our billing system: we note the IP address that we assign each user, use ntopng to keep track of how many bytes a specific IP address sends and receives… and that’s it! This takes care of data, obviously, but it also covers everything else. Text messages? They’re just IMS traffic from one IP address to another, and counted in the data. Phone calls? VoLTE runs over IP, done. What if a user wants to use WhatsApp or Skype instead? We really couldn’t care less – it’s all just data. Count the bytes, multiply by the rate we choose (we’ll discuss rates in Part 2), and call it a day.
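
In Python, the whole accounting step looks roughly like this (the rate and byte counts are made up, and the exact way the counters come out of ntopng is glossed over):

```python
# ntopng (or anything else) gives us bytes per IP; we map IPs to subscribers
# and multiply by a rate. The rate and the traffic numbers are made up.
RATE_PER_MB = 0.02   # hypothetical price in USD per megabyte

# subscriber IP -> (bytes sent, bytes received) over the billing window
usage = {
    "192.168.151.2": (12_000_000, 85_000_000),
    "192.168.151.3": (1_500_000, 9_000_000),
}

for ip, (tx, rx) in usage.items():
    megabytes = (tx + rx) / 1_000_000
    print(f"{ip}: {megabytes:.1f} MB -> ${megabytes * RATE_PER_MB:.2f}")
```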

Credit Purchase and Resale

In addition to monitoring network usage, we had to create ways for users to “top up,” or put money on their account, and then purchase service with that money. In the States, accounts are typically post-paid (i.e. you get a bill after the fact), whereas in a lot of other places, including Bokondini, all accounts are pre-paid (i.e. you pay someone for credit ahead of time, and the network cuts you off if/when you hit zero). In most pre-paid networks, users top up or transfer credit via a text message interface, or in person at a store. However, since our system is LTE, and doesn’t have to be backwards-compatible with 2G, we decided to forego the traditional interfaces in favor of just running a locally-hosted website. We ended up building out this site to handle a wide range of tasks, so by going to “network.bokondini” users see a website that provides them with a wide range of information about their account (data consumed, balance remaining, connection status, etc.) and enables them to perform basic operations such as using their current balance to buy a data package.
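
Behind the website, a purchase boils down to a couple of balance updates. A minimal sketch, with hypothetical package sizes, prices, and account fields (not our actual schema):

```python
# Hypothetical package sizes, prices, and account fields -- not our real schema.
PACKAGES = {"small": (100, 1.00), "large": (1000, 8.00)}   # name -> (MB, USD)

account = {"balance_usd": 5.00, "data_mb_remaining": 20}


def buy_package(account: dict, name: str) -> bool:
    data_mb, price = PACKAGES[name]
    if account["balance_usd"] < price:
        return False                       # not enough credit: purchase refused
    account["balance_usd"] -= price
    account["data_mb_remaining"] += data_mb
    return True


buy_package(account, "small")
print(account)   # {'balance_usd': 4.0, 'data_mb_remaining': 120}
```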


Minutes as a “Fiat Currency”

To make matters even more interesting, pre-paid phone credit is often re-sold from one user to another, or traded for other goods and services. In Bokondini, there’s an entire retail economy of corner shop vendors that buy large amounts of cell phone credit at a bulk discount and then resell smaller portions to individual shoppers (e.g. “transfer ten dollars of credit, or 100 minutes, from my account to this phone number”) in exchange for cash.

The notion of credit resale actually makes our lives significantly easier, for two main reasons. First off, it creates a clear division between trading credit from one user to another, and “fiat”-ing new credit into the system to be traded. Obviously, the second action has to be very closely supervised, but we can get away with creating a bottleneck here (maybe we only allow one or two people to introduce new credit into the system) because the credit will then be traded and disperse through the community via these resellers.

Second off, the notion of “fiat” here makes us a lot safer and more resilient to many different scams and attacks. This is because with the one exception of creating credit, no real money ever changes hands or goes away. If one user cheats or steals from the account of another user, or hacks the system to add credit to their account, we can simply undo the transaction, or manually edit the balances ourselves: no value is ever actually lost for good.
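
One way to picture this is as a simple append-only ledger, where “fiat” entries are the only thing that creates credit and any entry can be reversed. A sketch, with illustrative names and amounts:

```python
# Illustrative names and amounts. "Fiat" entries are the only way credit is
# created; everything else is a transfer, and any entry can be reversed.
balances = {"reseller": 0.0, "customer": 0.0}
ledger = []


def fiat(dst: str, amount: float) -> None:
    # The one closely-supervised operation: minting new credit into the system.
    balances[dst] += amount
    ledger.append(("__new_credit__", dst, amount))


def transfer(src: str, dst: str, amount: float) -> None:
    balances[src] -= amount
    balances[dst] += amount
    ledger.append((src, dst, amount))


def undo_last() -> None:
    src, dst, amount = ledger.pop()
    balances[dst] -= amount
    if src != "__new_credit__":
        balances[src] += amount


fiat("reseller", 50)                   # a bulk block of credit, sold to a corner shop
transfer("reseller", "customer", 10)   # an individual top-up
undo_last()                            # disputed transaction? just reverse the entry
print(balances)   # {'reseller': 50.0, 'customer': 0.0}
```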

All right – that’s enough money talk for today. Stay tuned for Part 2, coming out soon, in which I talk about the different possible billing models, what our expenses look like, and how we need to generate revenue.

Update: Part 2 and Part 3 are now available.

 
