In early November 2017 the OpenFog Consortium (OFC) held the first conference focused exclusively on Fog Computing: the Fog World Congress. Since then, we’ve been reading more and more about Fog and Edge Computing in the popular literature and how they might challenge or benefit Cloud Computing. Fog Computing has been named among the top five innovation trends for 2018 and is predicted to reach a $700m market share by 2024. Naturally, given our interest in heterogeneous computing techniques, we’ve become very interested in Fog Computing. In this post we explore what Fog Computing is all about and how it compares to Cloud, Cloudlets, Mobile Cloud, Edge, Mobile Edge, and Serverless computing techniques.
Before starting a discussion of Fog Computing, it is useful to remind the reader of similar technologies and how they fail to address some application areas.
Cloud, Mobile Cloud, Cloudlets, and Serverless Computing
Cloud Computing is a decentralized computing model in which several geographically separated data centers provide computing, storage, and networking capabilities on demand. The Cloud provides Infrastructure as a Service (IaaS), in which servers and storage are provisioned on demand; Platform as a Service (PaaS), where whole applications are built and run; Software as a Service (SaaS), in which software that would nominally be deployed to a user’s computer is hosted in the Cloud (e.g. Microsoft Office 365 or Google Docs); and Network as a Service (NaaS), where Virtual Private Networks (VPNs) and bandwidth are provided to clients on demand. Recently cloud providers have started to offer Function as a Service (FaaS, often called “serverless computing”), in which individual functions within a greater application are provided as services. Pricing for these services varies between providers, but it is generally accepted that for most use cases using the cloud is, in the long run, less expensive than purchasing and maintaining servers.
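To make the FaaS model concrete, here is a minimal sketch of a serverless function following AWS Lambda’s Python handler convention: the platform invokes a single function per request, so only that function is deployed and billed. The greeting payload is purely illustrative.

```python
# Minimal Function-as-a-Service sketch in the style of AWS Lambda's
# Python handler convention: the platform calls handler(event, context)
# for each request; there is no server for the developer to manage.
import json

def handler(event, context):
    # 'event' carries the request payload; 'context' holds runtime info
    # (unused in this sketch).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the function can simply be invoked directly:
print(handler({"name": "Fog"}, None))
```

In a real deployment the provider, not the application, decides where and when this function runs, which is exactly the property that distinguishes FaaS from the other service models above.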
Mobile Cloud Computing (MCC) is probably what most consumers imagine when they think “Cloud Computing.” MCC attempts to resolve the shortcomings of mobile computing (e.g. performance, software deployment on heterogeneous architectures, and security) by offloading some of the functionality to the Cloud. Most of the Cloud applications that you can name (e.g. Google Docs, Office 365, etc.) are MCC applications. The primary benefits to users are that MCC extends battery life, dramatically increases available storage, and significantly enhances processing power for demanding applications. Compared to native applications, MCC applications come with increased latency and significantly more bandwidth utilization. Due to their dependence upon the cloud, MCC applications should employ some form of graceful failure mode during a loss of connectivity. Unfortunately, most MCC engineers assume perpetual Cloud connectivity, which may not be the case for users, especially those in rural, mountainous, or developing regions.
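A graceful failure mode of the kind described above can be as simple as falling back to a local cache when the Cloud is unreachable. The sketch below illustrates the pattern; fetch_from_cloud and the cached document are hypothetical stand-ins, and the “outage” is simulated.

```python
# Sketch of a graceful failure mode for an MCC client: try the Cloud
# first, fall back to a local cache when connectivity is lost.
# fetch_from_cloud and the cache contents are hypothetical stand-ins.

local_cache = {"doc-42": "stale but usable copy"}

def fetch_from_cloud(doc_id):
    # Stand-in for a real network call; here we simulate an outage.
    raise ConnectionError("no route to cloud")

def fetch_document(doc_id):
    try:
        fresh = fetch_from_cloud(doc_id)
        local_cache[doc_id] = fresh        # refresh the cache on success
        return fresh, "cloud"
    except ConnectionError:
        if doc_id in local_cache:
            return local_cache[doc_id], "cache"   # degrade gracefully
        raise                              # nothing cached: surface the error

doc, source = fetch_document("doc-42")
print(source)  # "cache", since the simulated network is down
```

The key design choice is that the user still gets a (possibly stale) answer instead of an error, which is precisely what most MCC applications fail to provide today.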
Although the Cloud is exceptionally versatile, it fails to address some use cases. In particular, applications that require low latency, produce prodigious quantities of data, or require offline operations cannot always be served optimally by the Cloud. Examples of applications that do not play well with the Cloud include oil and gas well monitoring systems which require exceptionally low latencies, computer-vision assisted video surveillance that consumes a lot of bandwidth, and emergency responder coordination that often operates in areas without network access.
The first attempt to address the shortfalls of Cloud Computing was a “cloud-in-a-box” solution called “Cloudlets.” Cloudlets are small, inexpensive, maintenance-free devices that provide microservices (using virtual machines or containers) and serve as local or intermediate cloud servers for nearby mobile devices. Cloudlets are installed along the network edge, most often within a business or other user-focused location. Two examples of Cloudlets are Network Attached Storage (NAS) units with Cloud synchronization (e.g. Synology’s CloudSync) and the (failed) PlugComputer.
Edge and Mobile Edge Computing
The proliferation of IoT devices and the resulting deluge of data encouraged the development of Edge Computing methods. As implied by its name, Edge Computing processes data at the edge of a network, typically near Internet border routers. By processing data locally, Edge Computing offers applications decreased latency, reduced (Internet) bandwidth, and the potential for offline functionality. Much like Cloudlets, most Edge Computing solutions take a “data center in a box” approach in which everything needed for an application to execute is contained within a field-deployable unit. This includes networking, storage, and computational functions. Unlike Cloudlets, which tend to minimize power consumption, Edge Computing boxes can range from extremely mobile, low-powered devices (e.g. Tactical Edge Computing from CMU’s Software Engineering Institute) to racks of traditional servers (e.g. Cisco FlexPod Express). Due to hardware and operating system variations, Edge Computing relies heavily on virtualization and container technologies for application deployment.
Mobile Edge Computing (MEC) is a subclass of Edge Computing that seeks to provide Edge Computing capabilities to devices within the first hop of wireless networks. For example, MEC could be implemented as a small data center at the base of a cell tower (or a similar location in a network topographic sense) that serves as a Content Delivery Network (CDN) server for a video streaming service. The benefits of MEC are many: mobile clients receive significantly reduced latency, energy savings, geographical awareness, context awareness, and enhanced security and privacy. The provider of the MEC node benefits from reduced external bandwidth utilization and increased revenue due to colocation. Present MEC applications include image analysis (e.g. facial recognition), video stream analysis, augmented reality, IoT, and connected vehicles.
Until recently, the self-contained nature of most Edge Computing hardware resulted in applications that were disconnected from the Cloud; however, recent work within the Multi-Access Edge Computing Initiative has encouraged more interoperability between the Cloud and Edge Computing solutions.
Fog Computing: Emerging from the mist
Much like Cloud Computing in its infancy, Fog Computing has suffered from a lack of definition, one that has only become murkier with each popular-press article. Many articles use Edge Computing synonymously with the Fog, even though the two are distinctly different architectural approaches.
Fog computing is an innately hierarchical approach to computing, networking, and storage that spans the space between the Cloud and the end user (or IoT/IIoT) device. At present there are two authoritative definitions of Fog Computing. The first is a draft definition from the National Institute of Standards and Technology:
“Fog computing is a horizontal, physical or virtual resource paradigm that resides between smart end-devices and traditional cloud or data centers. This paradigm supports vertically-isolated, latency-sensitive applications by providing ubiquitous, scalable, layered, federated, and distributed computing, storage, and network connectivity.”
The second is the definition from the OpenFog Consortium (which we’ve abridged slightly):
“Fog computing is a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things. It is a: (a) Horizontal Architecture, (b) Cloud-to-Thing continuum of services, and (c) System Level solution.”
Although the definitions differ slightly, the salient point is that Fog Computing provides computing, networking, and storage resources between the Cloud and the end user or device. Thus, unlike Cloudlets, which exist on the edge and serve as intermediary cloud servers, and Edge Computing, which concentrates applications near the network edge, Fog Computing permits application components to be deployed wherever they make the most sense from an operational, cost, or performance perspective, anywhere along the Cloud-to-client/thing continuum.
Illustrating the differences
To demonstrate the differences between Cloud, Cloudlets, Edge, and Fog Computing, let us consider three potential applications: network storage, video surveillance, and a video distribution system.
The first scenario we will consider is network storage. Potential deployments for all four systems are shown in the following figure:
In this scenario, we imagine User 1 wishes to share a 100 MB file with User 2 in the same town. For simplicity, let us assume that they have symmetric Internet connections with 22 Mbps of bandwidth (the average upload speed in the US as of January 2018).
In the case of Cloud storage, User 1 and User 2 interact directly with the cloud server. The initial connection will have a latency of 10-100 ms, and the transfer will take ~36 seconds to complete.
For the Cloudlet deployment, User 1 interacts with the local NAS server instead of the cloud server. The initial connection takes < 1 ms to establish, and the transfer could complete in as little as 1 second. The NAS will, in turn, transparently synchronize the file to the cloud server. User 2 will incur latencies and transfer times similar to the previous scenario.
With Edge Computing, a local NAS exists, but it is (probably) disconnected from the cloud. Here User 1 must again interact directly with the cloud server, costing 10-100 ms in latency and ~36 seconds to complete the transfer. User 2 will incur latencies and transfer times similar to the previous scenario.
The Fog solution functions very similarly to the Cloudlet solution, except it could optionally introduce intermediate storage locations. In the case that both users are on the same ISP, the Fog application could cache the file on the local ISP’s servers, thereby reducing external bandwidth consumption by 200 MB.
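The transfer times quoted in this scenario follow from simple arithmetic, sketched below. The 1 Gbps LAN rate for the Cloudlet case is an assumption; the 22 Mbps WAN figure comes from the scenario above.

```python
# Back-of-the-envelope transfer times for the 100 MB sharing scenario
# (22 Mbps WAN uplink; a 1 Gbps LAN link to the Cloudlet NAS is assumed).

FILE_MB = 100      # file size in megabytes
WAN_MBPS = 22      # WAN bandwidth in megabits per second
LAN_MBPS = 1000    # assumed LAN bandwidth to a local NAS

def transfer_seconds(size_mb, link_mbps):
    return size_mb * 8 / link_mbps   # MB -> megabits, divide by link rate

wan = transfer_seconds(FILE_MB, WAN_MBPS)
lan = transfer_seconds(FILE_MB, LAN_MBPS)
print(f"Cloud upload:   ~{wan:.0f} s")   # ~36 s, matching the text
print(f"Cloudlet (LAN): ~{lan:.1f} s")   # well under a second
```

The same function also explains the symmetric ~36 s that User 2 pays to download the file in the Cloud and Edge cases.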
Next consider a video surveillance application whose network topology is visualized below:
In the video surveillance scenario, imagine a system with three video cameras (C1, C2, C3) connected to the Internet using a common 22 Mbps connection. In all four cases the network diagrams are very similar, consisting of some gateway device connected to the Internet. What differs is where the processing is conducted.
In the Cloud case, the cameras upload video frames to the Cloud for analysis when motion is detected. If a frame is interesting, the Cloud alerts the user through the Internet. Although this configuration permits the application of extremely sophisticated computer vision techniques, it has heavy bandwidth utilization and does not function during a network outage.
In the case of Cloudlet, Edge, and Fog deployments, these detrimental effects are avoided by processing and storing video frames on-site. Due to its lower-powered hardware, a Cloudlet deployment would likely filter the data and upload subframes to the Cloud for further analysis, which, in turn, notifies the user. Meanwhile, Edge and Fog nodes could apply computer vision functionality similar to what is found in the cloud and notify the user directly. Fog has the added advantage that further processing and notification generation could be performed within an intermediate data center. Note that in the case of a network outage the Cloudlet, Edge, and Fog deployments continue operation, but user notification would be disrupted unless a backup network connection (e.g. cellular) existed.
Video Distribution System
Lastly let us consider a video distribution system as depicted below:
For the video distribution scenario, imagine five users who wish to watch the latest episode of a popular TV program that is “x” in size. In video distribution systems, it is exceptionally rare for the content provider (e.g. Netflix) to serve the video directly; instead, they rely on a series of geographically distributed Content Distribution Network (CDN) nodes. CDNs can be separate service providers, or provider-managed servers in various cloud data centers.
In a Cloud Computing application, each user would interact with one of the CDN nodes and download the episode, in its entirety, over their own network connection. Thus the total download bandwidth cost is 5x to the CDNs, 3x to ISP1, and 2x to ISP2.
The same scenario implemented as a Cloudlet or Edge Computing application reduces bandwidth consumption through localized caching. In the case of Cloudlets, users interact with the local Cloudlet instance (visualized as routers R1, R2, and R3 in the above diagram) and the Cloudlet, in turn, fetches data from the CDN. Inspecting the diagram above, the download costs are 3x to the CDNs, 1x to ISP1, and 2x to ISP2. In an ideal Edge Computing application of this type, the users would interact with the provider, but data would be served from an Edge cache.
In the Fog Computing application of this scenario, the location where data are cached is mutable. As such, data could be cached and moved to the best location once two users request the same episode. In the diagram above, the users would interact with some hierarchical distribution of Fog nodes, with the provider serving as the root and CDNs as the first set of intermediate nodes. Upon receiving requests for the episode, ISP1 and ISP2 would download precisely one copy of the episode from a CDN and cache it; the ISPs could also spin up caching servers on the end-users’ routers to temporarily store the video. As depicted above, the download bandwidth cost is 2x to the CDNs, 1x to ISP1, and 2x to ISP2.
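The CDN-side savings in this scenario can be tallied with a toy model: without caching every viewer streams from a CDN, while with ISP-level caching each ISP fetches exactly one copy. Costs inside each ISP depend on the router topology in the diagram, so only the CDN side is modeled here.

```python
# Simplified tally of CDN download cost (in multiples of episode size x)
# for the video-distribution scenario. Only CDN-side cost is modeled;
# intra-ISP costs depend on the diagram's router topology.

viewers_per_isp = {"ISP1": 3, "ISP2": 2}   # five viewers total

def cdn_cost(viewers_per_isp, isp_caching):
    if isp_caching:
        # Fog/Cloudlet-style caching: each ISP fetches exactly one copy.
        return len(viewers_per_isp)
    # Plain Cloud: every viewer streams the full episode from a CDN.
    return sum(viewers_per_isp.values())

print(cdn_cost(viewers_per_isp, isp_caching=False))  # 5 -> cost of 5x
print(cdn_cost(viewers_per_isp, isp_caching=True))   # 2 -> cost of 2x
```

This matches the 5x Cloud and 2x Fog CDN figures above; the Fog deployment’s extra flexibility lies in being able to push that cache even further down, onto the end-users’ routers.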
Fog Computing is an emerging computational architecture that provides an innately hierarchical approach to computing, networking, and storage spanning the space between the Cloud and the end user or device. It best serves applications that require low latency, have high bandwidth requirements, or must maintain operation when disconnected from the Cloud. Unlike Cloudlets, which are full-scale Cloud deployments on the network edge, or Edge Computing, which concentrates computational tasks near a network’s edge, Fog permits applications to deploy their components anywhere along the Cloud-to-user/thing continuum where it makes the most sense from a latency, bandwidth, or cost perspective.
Here at Pratum Labs we are really excited about the prospect of Fog Computing and intend to blog avidly about its application to various market spaces. In our next blog post we are going to discuss existing Fog Computing frameworks, including Cisco IOx, FogHorn Systems’ Lightning, PrismTech’s Vortex, and Google Connections. If you wish to be notified of future updates on this topic, please subscribe to our mailing list below or follow us on Twitter.