Chaos Engineering: Breaking Things on Purpose

Written by Brooks Canavesi on November 5, 2017. Posted in Blog, Mobile App Development, Software & App Sales, Technology trends

Modern distributed systems, especially within the realm of cloud computing, have become so complex and unpredictable that it’s no longer feasible to reliably identify all the things that can go wrong. From bad configuration pushes to hardware failures to sudden surges in traffic with unexpected results, the number of possible failures is too large for flawless distributed systems to exist. If perfection is unattainable, what else is there to strive for? Resiliency.

“The cloud is all about redundancy and fault-tolerance. Since no single component can guarantee 100 percent uptime (and even the most expensive hardware eventually fails), we have to design a cloud architecture where individual components can fail without affecting the availability of the entire system. In effect, we have to be stronger than our weakest link,” explains Netflix.

To know with certainty that a failure of an individual component won’t affect the availability of the entire system, it’s necessary to experience the failure in practice, preferably in a realistic and fully automated manner. When a system has been tested against a sufficient number of failures, and all the discovered weaknesses have been addressed, such a system is very likely resilient enough to survive use in production.

“This was our philosophy when we built Chaos Monkey, a tool that randomly disables our production instances to make sure we can survive this common type of failure without any customer impact,” says Netflix, whose early experiments with resiliency testing in production have given birth to a new discipline in software engineering: Chaos Engineering.

What Is Chaos Engineering?

The core idea behind Chaos Engineering is to break things on purpose to discover and fix weaknesses. Netflix, a pioneer in the field of automated failure testing and the company that originally formalized Chaos Engineering as a discipline in the Principles of Chaos Engineering, defines it as “the discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.”

Chaos Engineering acknowledges that we live in an imperfect world where things break unexpectedly and often catastrophically. Knowing this, the most productive decision we can make is to accept this reality and focus on creating quality products and services that are resilient to failures.

Mathias Lafeldt, a professional infrastructure developer who’s currently working remotely for Gremlin Inc., says, “Building resilient systems requires experience with failure. Waiting for things to break in production is not an option. We should rather inject failures proactively in a controlled way to gain confidence that our production systems can withstand those failures. By simulating potential errors in advance, we can verify that our systems behave as we expect—and to fix them if they don’t.”

In doing so, we’re building systems that are antifragile, which is a term borrowed from Nassim Nicholas Taleb’s 2012 book titled “Antifragile: Things That Gain from Disorder.” Taleb, a Lebanese-American essayist, scholar, statistician, former trader, and risk analyst, introduces the book by saying, “Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.”

On his blog, Lafeldt gives another example of antifragility, “Take the vaccine—we inject something harmful into a complex system (an organism) in order to build an immunity to it. This translates well to our distributed systems where we want to build immunity to hardware and network failures, our dependencies going down, or anything that might go wrong.”

Just like with vaccination, exposing a system to volatility, randomness, disorder, and stressors must be done in a well-thought-out manner that won’t wreak havoc should something go wrong. Automated failure testing should ideally start with the smallest possible impact that can still teach something and gradually become more impactful as the tested system becomes more resilient.

The Five Principles of Chaos Engineering

“The term ‘chaos’ evokes a sense of randomness and disorder. However, that doesn’t mean Chaos Engineering is something that you do randomly or haphazardly. Nor does it mean that the job of a chaos engineer is to induce chaos. On the contrary: we view Chaos Engineering as a discipline. In particular, we view Chaos Engineering as an experimental discipline,” state Casey Rosenthal, Lorin Hochstein, Aaron Blohowiak, Nora Jones, and Ali Basiri in “Chaos Engineering: Building Confidence in System Behavior through Experiments.”

In their book, the authors propose the following five principles of Chaos Engineering:

Hypothesize About Steady State

The Systems Thinking community uses the term “steady state” to refer to a property that a system tends to maintain within a certain range or pattern. In terms of failure testing, the normal operation of the tested system is its steady state, and we can determine what constitutes normal based on a number of metrics, including CPU load, memory utilization, network I/O, the time it takes to service web requests, the time spent in various database queries, and so on.

“Once you have your metrics and an understanding of their steady state behavior, you can use them to define the hypotheses for your experiment. Think about how the steady state behavior will change when you inject different types of events into your system. If you add requests to a mid-tier service, will the steady state be disrupted or stay the same? If disrupted, do you expect the system output to increase or decrease?” ask the authors.
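A steady-state hypothesis of this kind can be made concrete as a simple tolerance check on a metric. The sketch below is a toy illustration; the metric (requests per second), baseline, and 20% tolerance are invented for the example:

```python
import statistics

def within_steady_state(samples, baseline_mean, tolerance=0.2):
    """True if the observed metric stays within +/- tolerance
    of the baseline mean established before the experiment."""
    return abs(statistics.mean(samples) - baseline_mean) <= tolerance * baseline_mean

# Baseline: requests served per second under normal operation.
baseline_rps = 1000.0

# Hypothesis: injecting extra load on a mid-tier service keeps the
# steady state (requests per second) within 20% of the baseline.
during_experiment = [980, 1010, 950, 1005, 990]
print(within_steady_state(during_experiment, baseline_rps))  # True: hypothesis holds
```

In a real experiment the samples would come from production monitoring, and a violated hypothesis would both abort the experiment and point at a weakness to fix.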

Vary Real-World Events

Suitable events for a chaos experiment include all events that are capable of disrupting steady state. This includes hardware failures, functional bugs, state transmission errors (e.g., inconsistency of states between sender and receiver nodes), network latency and partition, large fluctuations in input (up or down) and retry storms, resource exhaustion, unusual or unpredictable combinations of inter-service communication, Byzantine failures (e.g., a node believing it has the most current data when it actually does not), race conditions, downstream dependencies malfunction, and others.

“Only induce events that you expect to be able to handle! Induce real-world events, not just failures and latency. While the examples provided have focused on the software part of systems, humans play a vital role in resiliency and availability. Experimenting on the human-controlled pieces of incident response (and their tools!) will also increase availability,” warn the authors.
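Injecting one of these events can be as simple as wrapping a service call. The sketch below adds latency to a fraction of invocations; the service function, delay, and probability are hypothetical stand-ins, not any real tool’s API:

```python
import random
import time

def with_injected_latency(call, delay_s=0.05, probability=0.5):
    """Wrap a service call so a fraction of invocations see added
    latency, simulating a slow downstream dependency."""
    def wrapped(*args, **kwargs):
        if random.random() < probability:
            time.sleep(delay_s)       # the injected real-world event
        return call(*args, **kwargs)
    return wrapped

def fetch_recommendations(user_id):
    # Stand-in for a real RPC to a recommendation service.
    return ["title-%d" % i for i in range(3)]

flaky_fetch = with_injected_latency(fetch_recommendations, delay_s=0.01)
print(flaky_fetch(42))  # same result as the unwrapped call, sometimes later
```

The interesting question is not whether the wrapper works but whether callers of the slowed-down service still meet their own steady-state expectations.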

Run Experiments in Production

Chaos Engineering prefers to experiment directly on production traffic to guarantee both the authenticity of the way in which the system is exercised and relevance to the currently deployed system. This goes against the commonly held tenet of classical testing, which strives to identify problems as far away from production as possible. Naturally, one needs to have a lot of confidence in the tested system’s resiliency to the injected events. Knowledge of an existing weakness indicates a lack of maturity in the system, which needs to be addressed before conducting any Chaos Engineering experiments.

“When we do traditional software testing, we’re verifying code correctness. We have a good sense about how functions and methods are supposed to behave, and we write tests to verify the behaviors of these components. When we run Chaos Engineering experiments, we are interested in the behavior of the entire overall system. The code is an important part of the system, but there’s a lot more to our system than just code. In particular, state and input and other people’s systems lead to all sorts of system behaviors that are difficult to foresee,” write the authors.

Automate Experiments to Run Continuously

Automation is a critical pillar of Chaos Engineering. Chaos engineers automate the execution of experiments, the analysis of experimental results, and sometimes even aspire to automate the creation of new experiments. That said, one-off manual experiments are a good place to start with failure testing. After a few batches of carefully designed manual experiments, the natural next step is their automation.

“The challenge of designing Chaos Engineering experiments is not identifying what causes production to break, since the data in our incident tracker has that information. What we really want to do is identify the events that shouldn’t cause production to break, and that have never before caused production to break, and continuously design experiments that verify that this is still the case,” write the authors, emphasizing what to pay attention to when designing automated experiments.

Minimize Blast Radius

It’s important to realize that each chaos experiment has the potential to cause real damage. The difference between a badly designed chaos experiment and a well-designed one is the blast radius. The most basic way to minimize the blast radius of any chaos experiment is to always have an emergency stop mechanism in place that can instantly shut down the experiment if it goes out of control. Chaos experiments should build upon each other, taking careful, measured risks that gradually escalate the overall scope of the testing without causing unnecessary harm.
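An emergency stop can be sketched as a loop that escalates the failure intensity one step at a time and aborts the moment the watched metric crosses a threshold. All names and numbers below are illustrative, not any real tool’s API:

```python
def run_with_kill_switch(inject, metric, abort_threshold, max_intensity=10):
    """Escalate failure intensity one step at a time, and hit the
    emergency stop the moment the watched metric crosses the threshold."""
    for intensity in range(1, max_intensity + 1):
        inject(intensity)                     # e.g. degrade `intensity` instances
        if metric() > abort_threshold:
            return ("aborted", intensity)     # emergency stop triggered
    return ("completed", max_intensity)

# Simulated system whose error count grows with the injected intensity.
state = {"intensity": 0}
def inject(i): state["intensity"] = i
def errors_per_second(): return state["intensity"]

print(run_with_kill_switch(inject, errors_per_second, abort_threshold=5))
# ('aborted', 6): the experiment stopped itself before reaching full scale
```

Because the stop condition is evaluated after every escalation step, the blast radius is bounded by one increment of intensity rather than by the full scope of the experiment.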

“The entire purpose of Chaos Engineering is undermined if the tooling and instrumentation of the experiment itself cause an undue impact on the metric of interest. We want to build confidence in the resilience of the system, one small and contained failure at a time,” caution the authors in the book.

Chaos at Netflix

Netflix has been practicing some form of resiliency testing in production ever since the company began moving out of data centers into the cloud in 2008. The first Chaos Engineering tool to gain fame outside Netflix’s offices was Chaos Monkey, which is currently in version 2.0.

“Years ago, we decided to improve the resiliency of our microservice architecture. At our scale, it is guaranteed that servers on our cloud platform will sometimes suddenly fail or disappear without warning. If we don’t have proper redundancy and automation, these disappearing servers could cause service problems. The Freedom and Responsibility culture at Netflix doesn’t have a mechanism to force engineers to architect their code in any specific way. Instead, we found that we could build strong alignment around resiliency by taking the pain of disappearing servers and bringing that pain forward. We created Chaos Monkey to randomly choose servers in our production environment and turn them off during business hours,” explains Netflix.

The rate at which Chaos Monkey turns off servers is higher than the rate at which server outages happen normally, and Chaos Monkey is configured to turn off servers during production hours. Thus, engineers are forced to build resilient services through automation, redundancy, fallbacks, and other best practices of resilient design.
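In spirit, the core of such a tool is tiny: pick a random victim, but only when engineers are around to respond. The sketch below is a hypothetical illustration, not Netflix’s actual Chaos Monkey code:

```python
import random
from datetime import datetime

def should_run(now):
    """Terminate only on weekdays during business hours, so engineers
    are present to observe the failure and respond to it."""
    return now.weekday() < 5 and 9 <= now.hour < 17

def pick_victim(instances):
    """Choose one production instance at random to turn off."""
    return random.choice(instances)

if should_run(datetime(2017, 11, 6, 10, 30)):   # a Monday morning
    print("terminating", pick_victim(["i-01", "i-02", "i-03"]))
```

The real tool adds scheduling, opt-in configuration per service, and integration with the deployment platform, but the business-hours constraint is the key safety property: failures happen when the people who must handle them are at their desks.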

While previous versions of Chaos Monkey were additionally allowed to perform actions like burning up CPU and taking storage devices offline, Netflix uses Chaos Monkey 2.0 to only terminate instances. Chaos Monkey 2.0 is fully integrated with Netflix’s open source multi-cloud continuous delivery platform, Spinnaker, which is intended to make it easy to extend and enhance cloud deployment models. The integration with Spinnaker allows service owners to set their Chaos Monkey 2.0 configs through the Spinnaker apps, and Chaos Monkey 2.0 to get information about how services are deployed from Spinnaker.

Once Netflix realized the enormous potential of breaking things on purpose to rebuild them better, the company decided to take things to the next level and move from the small scale to the very large scale with the release of Chaos Kong in 2013, a tool capable of testing how their services behave when a zone or an entire region is taken down. According to Nir Alfasi, a Netflix engineer, the company practices region outages using Kong almost every month.

“What we need is a way to limit the impact of failure testing while still breaking things in realistic ways. We need to control the outcome until we have confidence that the system degrades gracefully, and then increase it to exercise the failure at scale. This is where FIT (Failure Injection Testing) comes in,” stated Netflix in early 2014, after realizing that they needed a finer degree of control when deliberately breaking things than their existing tools allowed at the time. FIT is a platform designed to simplify the creation of failure within Netflix’s ecosystem with a greater degree of precision. FIT also allows Netflix to propagate its failures across the entirety of Netflix in a consistent and controlled manner. “FIT has proven useful to bridge the gap between isolated testing and large-scale chaos exercises, and make such testing self-service.”

Once the Chaos Engineering team at Netflix believed that they had a good story at small scale (Chaos Monkey) and large scale (Chaos Kong) and in between (FIT), it was time to formalize Chaos Engineering as a practice, which happened in mid-2015 with the publication of the Principles of Chaos Engineering. “With this new formalization, we pushed Chaos Engineering forward at Netflix. We had a blueprint for what constituted chaos: we knew what the goals were, and we knew how to evaluate whether or not we were doing it well. The principles provided us with a foundation to take Chaos Engineering to the next level,” write Casey Rosenthal, Lorin Hochstein, Aaron Blohowiak, Nora Jones, and Ali Basiri in “Chaos Engineering: Building Confidence in System Behavior through Experiments.”

The latest notable addition to Netflix’s Chaos Engineering family of tools is ChAP (Chaos Automation Platform), which was launched in late 2016. “We are excited to announce ChAP, the newest member of our chaos tooling family! Chaos Monkey and Chaos Kong ensure our resilience to instance and regional failures, but threats to availability can also come from disruptions at the microservice level. FIT was built to inject microservice-level failure in production, and ChAP was built to overcome the limitations of FIT so we can increase the safety, cadence, and breadth of experimentation,” said Netflix when introducing its new failure testing automation tool.

Although Netflix isn’t the only company interested in Chaos Engineering, their willingness to develop in the open and share with others has had a profound influence on the industry. Besides regularly speaking at various industry events, Netflix’s GitHub page contains a wealth of interesting open source projects that are ready for adoption.

Chaos Engineering is also being embraced by Etsy, Microsoft, Jet, Gremlin, Google, and Facebook, among others. These and other companies have developed a comprehensive range of open source tools for different use cases, including Simoorg (LinkedIn’s own failure inducer framework), Pumba (a chaos testing and network emulation tool for Docker), Chaos Lemur (a self-hostable application to randomly destroy virtual machines in a BOSH-managed environment), and Blockade (a Docker-based utility for testing network failures and partitions in distributed applications), just to name a few.

Learn to Embrace Chaos

If you now feel inspired to embrace the principles and tools described above and create your own Chaos Engineering experiments, you may want to adhere to the following experiment design process, as outlined in “Chaos Engineering: Building Confidence in System Behavior through Experiments”:
  1. Pick a hypothesis
    • Decide what hypothesis you’re going to test and don’t forget that your system includes the humans that are involved in maintaining it.
  2. Choose the scope of the experiment
    • Strive to run experiments in production and minimize blast radius. The closer your test is to production, the more you’ll learn from the results.
  3. Identify the metrics you’re going to watch
    • Try to operationalize your hypothesis using your metrics as much as possible. Be ready to abort early if the experiment has a more serious impact than you expected.
  4. Notify the organization
    • Inform members of your organization about what you’re doing and coordinate with multiple teams who are interested in the outcome and are nervous about the impact of the experiment.
  5. Run the experiment
    • The next step is to run the experiment while keeping an eye on your metrics in case you need to abort it.
  6. Analyze the results
    • Carefully analyze the result of the experiment and feed the outcome of the experiment to all the relevant teams.
  7. Increase the scope
    • Once you gain confidence running smaller-scale experiments, you may want to increase the scope of an experiment to reveal systemic effects that aren’t noticeable with smaller-scale experiments.
  8. Automate
    • The more regularly you run your Chaos Experiments, the more value you can get out of them.
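The steps above can be condensed into a minimal harness. Everything here, from the field names to the metric, is a hypothetical sketch rather than a real framework’s API:

```python
def run_chaos_experiment(exp):
    """Walk one experiment through the process above. Every field is a
    callable supplied by the team; the names are illustrative only."""
    exp["notify"]()                       # 4. tell the organization first
    baseline = exp["measure"]()           # 3. steady-state metric before injection
    exp["inject"]()                       # 5. run the experiment
    observed = exp["measure"]()           # watch the same metric during/after
    holds = exp["hypothesis"](baseline, observed)   # 6. analyze the results
    if not holds:
        exp["abort"]()                    # stop early on serious impact
    return holds

log = []
experiment = {
    "notify": lambda: log.append("teams notified"),
    "measure": lambda: 100.0,                          # e.g. requests per second
    "inject": lambda: log.append("instance terminated"),
    "hypothesis": lambda before, after: abs(after - before) <= 0.1 * before,
    "abort": lambda: log.append("emergency stop"),
}
print(run_chaos_experiment(experiment))   # True: steady state held
```

Steps 7 and 8 (increase the scope, automate) then amount to widening the injected failure and running this harness on a schedule instead of by hand.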
Since some degree of chaos and unpredictability is inevitable, why not embrace it? “The next step is to institutionalize chaos, perhaps by embracing Netflix’s open source Simian Army. But really [embracing Chaos Engineering] is not so much a matter of technology as it is culture. Telling your developers to expect and foster failure as a way to drive resilience into your cloud systems is a big step on the path to engineering in the 21st Century. Time to get started,” concludes Matt Asay in his article on the subject.

Conclusion

Chaos Engineering is a remarkably valuable discipline and practice that can help any business or organization build resilient distributed systems capable of withstanding the failures they will inevitably face. Chaos Engineering can be performed at any scale and any level of automation. Despite its young age, Chaos Engineering has already changed how we think about failure testing, and thanks to companies such as Netflix, there’s also a sizable range of Chaos Engineering tools available to anyone who would like to experience first-hand what the discipline has to offer.

 

Google ARCore: Augmented Reality for the Masses

Written by Brooks Canavesi on November 2, 2017. Posted in Blog, Mobile App Development, Technology trends

Unlike virtual reality, augmented reality has yet to capture the attention of the average consumer. Last year, the augmented reality market was valued at $2.39 billion, and it’s expected to reach $61.39 billion by 2023. Such a high growth rate will only be possible if augmented reality comes to the masses, and Google has just announced a new software development kit for augmented reality that might do just that.

ARCore, as Google calls its rich set of tools, frameworks, and APIs, is built on the fundamental technologies that power Tango, the company’s original augmented reality computing platform, but it differs in one crucial way: it doesn’t need any additional hardware to function.

Takes More Than Two to Tango

Google released Tango in 2014, enabling certain smartphones to detect their position relative to the world around them without using GPS or other external signals. Right from the get-go, developers were able to create apps for the platform that integrated motion tracking, area learning, and depth perception, using Tango’s C and Java APIs to access this data in real time.

The main reason we hear so little about Tango just a few years after its launch is the terribly low number of smartphones that support it. On Tango’s official website, Google currently (September 2017) lists only two devices: the Lenovo Phab 2 Pro and the Asus ZenFone AR. The former is huge, and the latter doesn’t support FDD-LTE band 12. So, not exactly a great selection.

In an age when Chinese brands like Xiaomi, Meizu, and Huawei steadily release affordable smartphones with impressive specifications, the average consumer simply has too many other interesting options to even consider buying a specific smartphone just to try augmented reality. In 2017, augmented reality is still just an intriguing toy, not a major selling point.

“We’ve been developing the fundamental technologies that power mobile AR over the last three years with Tango, and ARCore is built on that work. But, it works without any additional hardware, which means it can scale across the Android ecosystem,” said Dave Burke, Google’s vice-president for Android, in the release statement.

“ARCore will run on millions of devices, starting today with the Pixel and Samsung’s S8, running 7.0 Nougat and above. We’re targeting 100 million devices at the end of the preview. We’re working with manufacturers like Samsung, Huawei, LG, ASUS, and others to make this possible with a consistent bar for quality and high performance,” Burke added.

The goal here is to create a generic augmented reality platform that individual manufacturers can support as much or as little as they want. So far, the only known requirement is a minimum SDK version of Android 7.0 (Nougat). It’s possible that ARCore will, at least to some degree, run even on older versions of Android, but that’s something that still needs to be tested. From the point of view of Android developers, ARCore will be yet another capability they can use to enrich their apps.

ARCore Versus Other Augmented Reality Platforms

Google is slightly late to the augmented reality party. Apple introduced its augmented reality platform, ARKit, back in June, and third-party developers have already used it to produce a host of clever experiments that anyone with an Apple device with either the A9 or the A10 (or newer) processor can try.

In April, at this year’s F8 keynote, Facebook introduced the company’s augmented reality platform, which focuses on artificial intelligence-powered cameras. “We’re making the camera the first augmented reality platform,” said Zuckerberg.

With so much competition and such high stakes, ARCore needs to give developers exceptional tools and flawless performance to avoid the fate of Tango. On this front, Google focuses on three things: motion tracking, environmental understanding, and light estimation.

Using the combination of Java/OpenGL, Unity, and Unreal, developers can use ARCore to determine both the position and orientation of the phone as it moves to keep virtual objects accurately placed in the real environment. The same points that ARCore uses for motion tracking are also used to keep objects accurately placed on horizontal surfaces, such as a floor or a table. Finally, “ARCore observes the ambient light in the environment and makes it possible for developers to light virtual objects in ways that match their surroundings, making their appearance even more realistic,” explains Burke.

To further support augmented reality development, Google developed Blocks and Tilt Brush. Blocks is a simple 3D modeling tool designed to make creating 3D models as accessible as possible. Artists can share their creations with others and easily use them for their own projects. Tilt Brush is a virtual reality painting application with an intuitive interface. Together with Blocks, Tilt Brush gives developers everything they need to create beautiful assets in a natural and fun way.

“We think the Web will be a critical component of the future of AR, so we’re also releasing prototype browsers for web developers so they can start experimenting with AR, too. These custom browsers allow developers to create AR-enhanced websites and run them on both Android/ARCore and iOS/ARKit,” said Burke.

Search is one area where augmented reality could prove to be tremendously useful, which is a big deal considering that Google is essentially synonymous with search in general. With the help of artificial intelligence, Google could one day be able to overlay assembly instructions on Ikea products, take recipes to a whole new level, or shatter language barriers.

ARCore’s success now depends on how well the technology works in practice. Unlike users of the company’s previous augmented reality platform or Apple’s ARKit, most Android users will experience ARCore through budget and mid-range devices with lower-resolution cameras and weaker CPUs. Unless ARCore works acceptably well outside the high-end smartphone category, most Android users won’t be interested in new augmented reality apps, and developers thus won’t be interested in making them.


Wearable Trends at IFA 2017

Written by Brooks Canavesi on October 23, 2017. Posted in Blog, IoT, Technology trends

IFA (Internationale Funkausstellung Berlin) is one of the oldest industrial exhibitions in Germany and the biggest tech show in Europe. This year’s IFA hosted 1,805 exhibitors, who occupied over 1.7M square feet of the sold-out show floor. From ground-breaking innovations to highly anticipated product launches, IFA’s visitors had the opportunity to get a glimpse into the near future of digital lifestyle products in one place, and tech companies were able to demonstrate which trends they see as important.

“Fitness wearables have become the norm in the shortest space of time, providing a perfect example of how quickly technologies can transform our lives,” said Alexander Zeeh, Samsung’s director of home appliances. “This is exactly what our motto for this year’s IFA expresses: ‘the new normal’ … Samsung has been actively working to shape this trend for five years now with new innovations such as our Samsung Gear smartwatches.”

Judging by the large number of wearable device launches at IFA 2017, Samsung isn’t the only company that sees things this way. In fact, the industry analyst firm CCS Insight predicts the wearables market will be worth $25 billion by 2019. As we go over the most important wearable devices launched at IFA 2017, notice that all of them fit into the lifestyle product category, with a clear orientation toward sports and health.

Samsung Gear Fit Pro 2

The Samsung Gear Fit Pro 2 is a relatively thin and narrow smart band with a vibrant, 1.5-inch AMOLED display. The target audience of this product is swimmers, who, along with clumsy individuals, are likely the only people who will benefit from the band’s 5 ATM water resistance, which allows the Gear Fit Pro 2 to withstand depths of up to 50 meters. The band can even automatically detect when it’s underwater and switch to Water Lock Mode to prevent water bubbles from interacting with the touch display.

Of course, Samsung didn’t forget about terrestrial athletes when designing the Gear Fit Pro 2. The built-in GPS sensor provides accurate location tracking, and the heart rate monitor on the back of the band offers continuous heart rate monitoring throughout the day and when exercising or playing sports.

The Gear Fit Pro 2 has a speedy dual-core CPU clocked at 1 GHz, 512 MB of memory, and 4 GB of storage space. It runs on Samsung’s open source operating system based on the Linux kernel, Tizen, and supports Spotify offline playlists. The 200 mAh battery lasts several days in standby mode and one or two days when used moderately often.

The Samsung Gear Fit Pro 2 band is now available for pre-order for $200.

Samsung Gear Sport

While the Samsung Gear Fit Pro 2 will appeal mostly to fitness enthusiasts and people who train every day, the Samsung Gear Sport is Samsung’s latest attempt to capture the smartwatch market as a whole. The design of this smartwatch is a well-executed mix of a timeless, semi-round, metal watch face with a durable and water resistant 20 mm nylon band, which suggests that the Gear Sport isn’t afraid of water. In fact, the smartwatch is water resistant to depths of up to 50 meters under ISO standard 22810:2010.

The Gear Sport helps users make the most out of every opportunity to exercise and reach various fitness goals. “When you’re on an airplane, Gear Sport adjusts accordingly, suggesting stretches that you can do from your seat. When you’re driving, it’s also smart enough to know that you’re focused on the road, not just inactive, so won’t ask you to stretch your muscles,” Samsung explains on its website. Like any good personal trainer, the Gear Sport lets users choose from dozens of workouts, measuring progress with the built-in heart rate sensor.

The Gear Sport also features NFC-based Samsung Pay compatibility for contactless credit and debit card payments. Additional functionality is available in the form of third-party apps, such as Endomondo, MyFitnessPal, and MapMyRun.

The Samsung Gear Sport smartwatch is expected to arrive in October, but its price has yet to be announced.

Fitbit Ionic

When Fitbit acquired Pebble, the Kickstarter-funded smartwatch manufacturer, for $23 million last year, everyone knew that it was only a matter of time before Fitbit released a spiritual successor to Pebble smartwatches. That smartwatch is now here and its name is Fitbit Ionic.

The Ionic smartwatch runs on Fitbit OS, offering full support for third-party applications. Developers can easily create new apps for the Ionic using JavaScript and SVG web standards. Fitbit gives developers access to all sensors the Ionic has, so the possibilities to create interesting apps are virtually limitless.

The Ionic has a built-in GPS and a heart rate sensor, is water resistant, supports contactless payments, and has plenty of storage space for offline music. The smartwatch features a slightly curved touchscreen display with up to 1000 nits of brightness and Corning Gorilla Glass 3 for protection.

Clearly, the Ionic is a premium product, and the steep price of $299.95 makes it the most expensive Fitbit device yet. What’s more, it also makes it more expensive than the Apple Watch Series 1, which could turn out to be a huge problem for Fitbit, a company with a great reputation for its relatively simple activity trackers that start at less than $100.

Invoxia Roadie Tracker

The Invoxia Roadie is an elegant GPS tracking device that doesn’t require a SIM card and has a battery life of up to 8 months. Invoxia sells the Roadie tracker for $99, and the price includes a 3-year network subscription.

What sets the Roadie apart from other compact GPS tracking devices is its ability to combine GPS technology with local wireless networks for maximum precision. This comes in handy, for example, when the Roadie is used to track a piece of luggage as it travels from airport to airport. The Roadie can be easily configured through the official mobile app, supporting location-based notification updates as well as real-time tracking.

Bang & Olufsen Beoplay E8

At IFA 2017, it was apparent that headphones and earbuds are becoming as wireless as they possibly can be. Apple’s AirPods are no longer the only elegant fully wireless earbuds on the market. The Bang & Olufsen Beoplay E8 are smaller than the AirPods, and they promise better sound quality and comparable battery life.

The Beoplay E8 are controlled by touch, allowing users to change tracks or take phone calls with a simple tap on the earbud. The aluminum construction should offer excellent durability, and the included leather charging case is just one of the many ways Bang & Olufsen hopes to justify the $299 price tag.

Samsung’s Gear IconX

The Gear IconX from Samsung look like ordinary wireless fitness earbuds, but they actually have one foot in fitness tracker territory. Inside the Gear IconX are a heart rate monitor, an accelerometer, and 4 GB of storage space. Thanks to all this technology, the Gear IconX can, more or less, replace a smart fitness band while also being great at playing music. The internal memory can hold up to 1,000 songs, so no smartphone is needed except for synchronization.

The Gear IconX have a layer of P2i nano coating for extra water resistance, and they ship with a compact charging case with a 315 mAh battery.

Sony WF-1000X

Bang & Olufsen is betting on premium sound quality, Samsung on fitness tracking features, and Sony on noise cancellation. The WF-1000X feature 6 mm drivers and Adaptive Sound Control to achieve the best possible sound quality with the least amount of background noise. Users can choose between two noise canceling modes: Normal and Voice. The former lets through all essential background sounds, which comes in handy when biking or jogging outside, while the latter only lets through the sound of the human voice.

The Sony WF-1000X earbuds show that wearable devices don't have to come with revolutionary features to be attractive. Miniaturization of existing technologies can give us wearables that solve existing problems in a better and more elegant way.

Conclusion

It seems that wearable devices and the fitness world are a match made in heaven. The smartphone is perhaps the main reason why we don't see more products aimed at people who don't wear neon running shoes, hi-viz jackets, and yoga pants. If your pockets are big enough for a smartphone, why would you spend hundreds of dollars on a device that offers only a handful of useful extra features?

Still, smartphones leave plenty of room for single-purpose, low-cost wearables like the Invoxia Roadie Tracker. We just have to wait for the technology to become more affordable, which may take a few more years. For the time being, we will likely continue to see products similar to those launched at IFA 2017.

Lean Manufacturing with 4D Printing

Written by Brooks Canavesi on August 29, 2017. Posted in Technology trends

Several leading additive manufacturing research facilities, including MIT’s Self-Assembly Lab, 3D printing manufacturer Stratasys, and 3D software company Autodesk, are taking 3D manufacturing to the next level by creating objects that can change their shape over time in response to various external stimuli.

The new process, called 4D printing, relies mostly on shape-memory polymers, which can form complex structures when exposed to heat, moisture, light, or kinetic energy. While still in its infancy, 4D printing has already been demonstrated by several universities.

Zhen Ding at the Singapore University of Technology and Design and his colleagues are using standard commercial 3D printers to rapidly print rigid 4D objects that change shape when heated to 45°C. Inspired by natural structures like plants, a team of scientists at the Wyss Institute for Biologically Inspired Engineering at Harvard University unveiled 4D-printed hydrogel composite structures that change shape when immersed in water.

Skylar Tibbits, a co-director and founder of the Self-Assembly Lab housed at MIT’s International Design Center, “sees all kinds of future applications for 4D printing. They range from sneakers that change how they fit on your feet based on what activities you are doing to how clothing changes composition based on the weather,” reports All3DP.

It will take some time before the technology moves past the research and development stage, and even longer before consumers can walk into a store and buy a 4D printer. Still, the wide range of examples of how the technology could be used shows just how much potential it has to transform fields like manufacturing, logistics, and healthcare.

Edge Computing: The Future of The Internet of Things

Written by Brooks Canavesi on August 29, 2017. Posted in IoT, Technology trends

During 2017, Gartner expects the total number of Internet of Things (IoT) devices to grow by 31 percent. Currently, most of these devices are connected to the cloud, but that may change soon as enterprises find it increasingly difficult to meet the computing demands of modern IoT devices and connected applications.

The cloud, the technology that has made it possible to move processing from large in-house data centers and individual devices to infinitely scalable infrastructures owned by third-party companies such as Microsoft, Google, and Amazon, isn't without its limits.

Privacy-conscious enterprises don't like sending all their data off-premises without control over what exactly they send, what they keep to themselves, and what they delete altogether. Developers of data-driven intelligent applications require near-real-time processing with single-digit millisecond latency, something that the cloud and current wireless technologies can't easily provide.

A new enterprise infrastructure is emerging, and industry experts expect that it will become the preferred architecture for IoT solutions. "The next big thing for enterprise IT comes in the form of edge computing—a paradigm where compute moves closer to the source of data," writes Janakiram MSV, an analyst, advisor and an architect at Janakiram & Associates.

“Edge computing is a new paradigm in which substantial computing and storage resources—variously referred to as cloudlets, micro datacenters, or fog nodes—are placed at the Internet’s edge in close proximity to mobile devices or sensors,” explains The Emergence of Edge Computing paper by Mahadev Satyanarayanan from Carnegie Mellon University.

The edge is an exciting place full of sensors, modules, and actuators: GPS receivers, valves, motors, temperature and light sensors, and more. These devices receive instructions from applications running in the cloud and, in turn, gather various data, creating a complete feedback loop.

With edge computing, the data gathered by edge devices is sorted into two broad categories: hot and cold. Hot data is critical and should be processed as soon as possible. Cold data, on the other hand, can be processed with a substantial delay because it contributes only to long-term analytics based on historical trends.

Because hot data should be processed almost instantaneously, it makes sense to leverage the computational power of the edge itself instead of sending it to a public cloud. The processing could be performed by a smart car, a smartphone, or a home automation system. It is up to complex event processing engines to decide whether to process data locally or let the cloud infrastructure handle it.
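As a sketch, the hot/cold decision can be as simple as a rules check on each reading; real complex event processing engines evaluate far richer patterns across streams of events, but the basic split looks roughly like this (the sensor types and thresholds are invented for illustration):

```javascript
// Classify a sensor reading as "hot" (process at the edge immediately) or
// "cold" (batch to the cloud for long-term analytics).
// Types and thresholds are illustrative, not from any real product.
function routeReading(reading) {
  const isHot =
    (reading.type === "temperature" && reading.value > 80) || // overheating
    reading.type === "collision"; // e.g. a car's impact sensor
  return isHot ? "edge" : "cloud";
}
```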

Benefits of Edge Computing

Enterprises currently face many problems when running data-centric workloads in the cloud. Even with a direct fiber-optic connection, latency is bounded by the speed of light. For systems where a few milliseconds could mean the difference between life and death, such as self-driving cars, edge computing is the obvious way to minimize latency.
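The speed-of-light floor is easy to quantify. Light in optical fiber propagates at roughly two thirds of its vacuum speed, about 200,000 km/s, so a back-of-the-envelope calculation gives the best-case round-trip time to a distant cloud region:

```javascript
// Best-case round-trip time over fiber, ignoring routing, queuing, and
// processing delays. Light in fiber travels at roughly 200,000 km/s,
// i.e. about 200 km per millisecond.
const FIBER_KM_PER_MS = 200;

function minRttMs(distanceKm) {
  return (2 * distanceKm) / FIBER_KM_PER_MS;
}

// A cloud region 1,000 km away can never answer in under 10 ms,
// while an edge node 1 km away has a physical floor of 0.01 ms.
```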

When the bulk of data generated by edge devices is processed locally, at the edge, the overall bandwidth demand into the cloud is considerably lower. Security video cameras tend to be extremely bandwidth-demanding even though the video footage they capture is usually stored only for a few hours and rarely seen by a human being. This video footage could be stored and analyzed locally, with only metadata being sent to the cloud.
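The savings are easy to estimate. A single camera streaming continuously at, say, 4 Mbps (an invented but plausible bitrate) pushes tens of gigabytes per day into the cloud, while event metadata is a rounding error by comparison:

```javascript
// Daily upload volume, in gigabytes, for a continuous stream at `mbps`.
function dailyGigabytes(mbps) {
  const SECONDS_PER_DAY = 86400;
  const bits = mbps * 1e6 * SECONDS_PER_DAY;
  return bits / 8 / 1e9; // bits -> bytes -> GB
}

// dailyGigabytes(4) is 43.2 GB per camera per day of raw video;
// a few kilobytes of "motion detected" metadata replaces nearly all of it.
```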

Because edge computing allows enterprises to retain sensitive data on-premises for as long as they want, it addresses growing concerns over data privacy arising from IoT system centralization. It would be up to each enterprise to set privacy policies that govern the release of the data to the cloud.

Finally, the ability of the edge to function independently of the cloud makes it much more resilient to network outages and malicious denial-of-service attacks. As more cities around the world become smarter, they will have to make security one of their priorities to ensure the safety and privacy of their residents.

Conclusion

The benefits of edge computing are numerous, but so are the technical challenges. The current edge comprises countless devices with distinct roles, and managing them in a centralized way seems almost impossible. More realistically, a new generation of connected devices will emerge and make the old generation obsolete.

This presents us with the classic chicken-and-egg problem: how do you convince companies to develop solutions for an infrastructure that's not here yet? Just like with the web, the technology itself will have to be appealing enough on its own to attract early adopters and reach critical mass.
