At-Home DNA Tests and Growing Privacy Concerns

Written by Brooks Canavesi on November 5, 2018. Posted in Blog, Software & App Sales, Technology Tips & Tricks

Direct-to-consumer genetic testing companies, such as 23andMe, AncestryDNA, MyHeritage DNA, and Living DNA, have convinced millions of people to place their genetic material, typically a saliva sample, in an envelope and send it off for analysis. The genetic testing market was worth approximately $99 million in 2017, and it's estimated to reach $310 million by 2022.

However, not everyone is as thrilled about the growing popularity of direct to consumer genetic testing as the companies that profit from it. “The key thing about your genetic data…it is uniquely yours. It identifies you, so if you are going to entrust it to a company, you should try to understand what the consequences are,” said Jennifer King, director of consumer privacy at Stanford Law School’s Center for Internet and Society.

While genetic testing companies have plenty of good reasons to protect the genetic data of their customers—their business depends on consumer trust, after all—cybercriminals are experts at finding ways to circumvent even the most state-of-the-art cyber defenses.

This was demonstrated on October 26, 2017, when MyHeritage, an online genealogy service launched by the Israeli company of the same name in 2003, was breached, exposing the email addresses and hashed passwords of more than 92 million users who had signed up for the service before that date.

The company didn’t disclose the breach until June 4, 2018, and it did so only after a security researcher reported finding a file that contained email addresses and hashed passwords on a private server. “Our Information Security Team received the file from the security researcher, reviewed it, and confirmed that its contents originated from MyHeritage and included all the email addresses of users who signed up to MyHeritage up to October 26, 2017, and their hashed passwords,” said MyHeritage in its statement.

“We determined that the file was legitimate and included the email addresses and hashed passwords of 92,283,889 users who had signed up to MyHeritage up to and including Oct 26, 2017 which is the date of the breach. MyHeritage does not store user passwords, but rather a one-way hash of each password, in which the hash key differs for each customer. This means that anyone gaining access to the hashed passwords does not have the actual passwords.”
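MyHeritage has not disclosed which hashing scheme it actually used, but the general idea it describes, storing only a one-way hash with a salt that differs per user, can be sketched in a few lines. The sketch below uses Node's built-in scrypt functions purely as an illustration; the function names and parameters are not MyHeritage's.

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "crypto";

// Store a one-way hash with a per-user salt instead of the password itself.
// Illustrative only; this is not MyHeritage's actual scheme.
function hashPassword(password: string): { salt: string; hash: string } {
  const salt = randomBytes(16).toString("hex");            // unique per user
  const hash = scryptSync(password, salt, 64).toString("hex");
  return { salt, hash };
}

function verifyPassword(password: string, salt: string, hash: string): boolean {
  const candidate = scryptSync(password, salt, 64);
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}
```

Because the salt differs for each user, an attacker who obtains the hashed values cannot reverse them into passwords or attack all accounts with a single precomputed table, which is why MyHeritage stressed that the leaked file did not contain actual passwords.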

The MyHeritage breach has served as a powerful reminder that consumers are not focusing on privacy nearly as much as they should be. The next data breach could be far more serious, and there are many ways genetic data could be exploited. For instance, insurance companies could use it to deny health insurance coverage to consumers with genetic predispositions to certain medical conditions.

To prevent this from happening, genetic testing companies must ensure that a data breach similar to the one that affected MyHeritage won’t happen again, and consumers must educate themselves on the privacy implications of sharing their genetic data with genetic testing companies and their partners.


Chaos Engineering: Breaking Things on Purpose

Written by Brooks Canavesi on November 5, 2017. Posted in Blog, Mobile App Development, Software & App Sales, Technology trends

Modern distributed systems, especially within the realm of cloud computing, have become so complex and unpredictable that it’s no longer feasible to reliably identify all the things that can go wrong. From bad configuration pushes to hardware failures to sudden surges in traffic with unexpected results, the number of possible failures is too large for flawless distributed systems to exist. If perfection is unattainable, what else is there to strive for? Resiliency.

“The cloud is all about redundancy and fault-tolerance. Since no single component can guarantee 100 percent uptime (and even the most expensive hardware eventually fails), we have to design a cloud architecture where individual components can fail without affecting the availability of the entire system. In effect, we have to be stronger than our weakest link,” explains Netflix.

To know with certainty that the failure of an individual component won't affect the availability of the entire system, it's necessary to experience the failure in practice, preferably in a realistic and fully automated manner. When a system has been tested against a sufficient number of failures, and all the discovered weaknesses have been addressed, such a system is very likely resilient enough to survive use in production.

“This was our philosophy when we built Chaos Monkey, a tool that randomly disables our production instances to make sure we can survive this common type of failure without any customer impact,” says Netflix, whose early experiments with resiliency testing in production have given birth to a new discipline in software engineering: Chaos Engineering.

What Is Chaos Engineering?

The core idea behind Chaos Engineering is to break things on purpose to discover and fix weaknesses. Chaos Engineering is defined by Netflix, a pioneer in the field of automated failure testing and the company that originally formalized Chaos Engineering as a discipline in the Principles of Chaos Engineering, as the discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.

Chaos Engineering acknowledges that we live in an imperfect world where things break unexpectedly and often catastrophically. Knowing this, the most productive decision we can make is to accept this reality and focus on creating quality products and services that are resilient to failures.

Mathias Lafeldt, a professional infrastructure developer who’s currently working remotely for Gremlin Inc., says, “Building resilient systems requires experience with failure. Waiting for things to break in production is not an option. We should rather inject failures proactively in a controlled way to gain confidence that our production systems can withstand those failures. By simulating potential errors in advance, we can verify that our systems behave as we expect—and to fix them if they don’t.”

In doing so, we’re building systems that are antifragile, which is a term borrowed from Nassim Nicholas Taleb’s 2012 book titled “Antifragile: Things That Gain from Disorder.” Taleb, a Lebanese-American essayist, scholar, statistician, former trader, and risk analyst, introduces the book by saying, “Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.”

On his blog, Lafeldt gives another example of antifragility, “Take the vaccine—we inject something harmful into a complex system (an organism) in order to build an immunity to it. This translates well to our distributed systems where we want to build immunity to hardware and network failures, our dependencies going down, or anything that might go wrong.”

Just like with vaccination, exposing a system to volatility, randomness, disorder, and stressors must be done in a well-thought-out manner that won't wreak havoc on the system should something go wrong. Automated failure testing should ideally start with the smallest possible impact that can still teach something and gradually become more impactful as the tested system becomes more resilient.

The Five Principles of Chaos Engineering

“The term ‘chaos’ evokes a sense of randomness and disorder. However, that doesn’t mean Chaos Engineering is something that you do randomly or haphazardly. Nor does it mean that the job of a chaos engineer is to induce chaos. On the contrary: we view Chaos Engineering as a discipline. In particular, we view Chaos Engineering as an experimental discipline,” state Casey Rosenthal, Lorin Hochstein, Aaron Blohowiak, Nora Jones, and Ali Basiri in “Chaos Engineering: Building Confidence in System Behavior through Experiments.”

In their book, the authors propose the following five principles of Chaos Engineering:

Hypothesize About Steady State

The Systems Thinking community uses the term "steady state" to refer to a property that a system tends to maintain within a certain range or pattern. In terms of failure testing, the normal operation of the tested system is its steady state, and we can determine what constitutes normal based on a number of metrics, including CPU load, memory utilization, network I/O, how long it takes to service web requests, how much time is spent in various database queries, and so on.

“Once you have your metrics and an understanding of their steady state behavior, you can use them to define the hypotheses for your experiment. Think about how the steady state behavior will change when you inject different types of events into your system. If you add requests to a mid-tier service, will the steady state be disrupted or stay the same? If disrupted, do you expect the system output to increase or decrease?” ask the authors.
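As a rough illustration of how a steady-state hypothesis might be expressed in code, the sketch below compares a live metric against a baseline within a tolerance band. The metric reader is a stand-in for whatever monitoring system you actually use; none of these names come from a specific tool.

```typescript
// A minimal steady-state hypothesis: a named metric should stay within a
// tolerance band around its baseline while the experiment runs.
interface SteadyStateHypothesis {
  metric: string;               // e.g. "requests_per_second"
  baseline: number;             // value observed under normal operation
  tolerance: number;            // allowed relative deviation, e.g. 0.05 = 5%
  read: () => Promise<number>;  // stand-in for your monitoring query
}

async function holds(h: SteadyStateHypothesis): Promise<boolean> {
  const current = await h.read();
  const deviation = Math.abs(current - h.baseline) / h.baseline;
  console.log(`${h.metric}: baseline=${h.baseline}, current=${current}`);
  return deviation <= h.tolerance;
}
```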

Vary Real-World Events

Suitable events for a chaos experiment include all events that are capable of disrupting steady state. This includes hardware failures, functional bugs, state transmission errors (e.g., inconsistency of states between sender and receiver nodes), network latency and partition, large fluctuations in input (up or down) and retry storms, resource exhaustion, unusual or unpredictable combinations of inter-service communication, Byzantine failures (e.g., a node believing it has the most current data when it actually does not), race conditions, downstream dependencies malfunction, and others.

“Only induce events that you expect to be able to handle! Induce real-world events, not just failures and latency. While the examples provided have focused on the software part of systems, humans play a vital role in resiliency and availability. Experimenting on the human-controlled pieces of incident response (and their tools!) will also increase availability,” warn the authors.
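To give a feel for what "injecting an event" can look like, here is a hedged sketch of a latency-and-failure wrapper around a downstream call. Real chaos tooling typically injects these faults at the network or proxy layer rather than in application code, but the idea is the same; the wrapped call is hypothetical.

```typescript
// Wrap a downstream call so that a configurable fraction of requests is
// delayed or failed, simulating network latency and dependency outages.
type Call<T> = () => Promise<T>;

function withChaos<T>(call: Call<T>, failureRate: number, addedLatencyMs: number): Call<T> {
  return async () => {
    if (Math.random() < failureRate) {
      throw new Error("injected dependency failure");
    }
    await new Promise((resolve) => setTimeout(resolve, addedLatencyMs));
    return call();
  };
}

// Usage (hypothetical downstream call): delay every request by 300 ms and
// fail 5% of them, then observe whether the steady-state metrics hold.
// const flakyFetchProfile = withChaos(fetchProfile, 0.05, 300);
```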

Run Experiments in Production

Chaos Engineering prefers to experiment directly on production traffic to guarantee both authenticity of the way in which the system is exercised and relevance to the currently deployed system. This goes against the commonly held tenet of classical testing, which strives to identify problems as far away from production as possible. Naturally, one needs to have a lot of confidence in the tested system’s resiliency to the injected events. The knowledge of existing weaknesses indicates a lack of maturity of the system, which needs to be addressed before conducting any Chaos Engineering experiments.

“When we do traditional software testing, we’re verifying code correctness. We have a good sense about how functions and methods are supposed to behave, and we write tests to verify the behaviors of these components. When we run Chaos Engineering experiments, we are interested in the behavior of the entire overall system. The code is an important part of the system, but there’s a lot more to our system than just code. In particular, state and input and other people’s systems lead to all sorts of system behaviors that are difficult to foresee,” write the authors.

Automate Experiments to Run Continuously

Automation is a critical pillar of Chaos Engineering. Chaos engineers automate the execution of experiments, the analysis of experimental results, and sometimes even aspire to automate the creation of new experiments. That said, one-off manual experiments are a good place to start with failure testing. After a few batches of carefully designed manual experiments, the natural next step is to automate them.

“The challenge of designing Chaos Engineering experiments is not identifying what causes production to break, since the data in our incident tracker has that information. What we really want to do is identify the events that shouldn’t cause production to break, and that have never before caused production to break, and continuously design experiments that verify that this is still the case,” the authors emphasize, pointing out what to pay attention to when designing automated experiments.

Minimize Blast Radius

It’s important to realize that each chaos experiment has the potential to cause real damage. The difference between a badly designed chaos experiment and a well-designed one is the blast radius. The most basic way to minimize the blast radius of any chaos experiment is to always have an emergency stop mechanism in place that can instantly shut down the experiment if it goes out of control. Chaos experiments should build upon each other, taking careful, measured risks that gradually escalate the overall scope of the testing without causing unnecessary harm.

“The entire purpose of Chaos Engineering is undermined if the tooling and instrumentation of the experiment itself cause an undue impact on the metric of interest. We want to build confidence in the resilience of the system, one small and contained failure at a time,” caution the authors in the book.
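A hedged sketch of the "emergency stop" idea: the experiment loop keeps checking the steady-state hypothesis while a failure is active and rolls the injected failure back the moment the metric drifts too far. The inject and rollback functions are placeholders for whatever fault your experiment introduces.

```typescript
// Poll the steady-state check while the failure is active and abort the
// experiment the moment the system drifts outside the allowed band.
async function runWithKillSwitch(
  inject: () => Promise<void>,            // start the failure (placeholder)
  rollback: () => Promise<void>,          // stop the failure (placeholder)
  steadyStateHolds: () => Promise<boolean>,
  durationMs: number,
  pollMs = 1_000
): Promise<void> {
  await inject();
  try {
    const deadline = Date.now() + durationMs;
    while (Date.now() < deadline) {
      if (!(await steadyStateHolds())) {
        console.warn("Steady state violated - aborting experiment");
        return;                           // blast radius contained
      }
      await new Promise((resolve) => setTimeout(resolve, pollMs));
    }
  } finally {
    await rollback();                     // always undo the injected failure
  }
}
```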

Chaos at Netflix

Netflix has been practicing some form of resiliency testing in production ever since the company began moving out of data centers into the cloud in 2008. The first Chaos Engineering tool to gain fame outside Netflix’s offices was Chaos Monkey, which is currently in version 2.0.

“Years ago, we decided to improve the resiliency of our microservice architecture. At our scale, it is guaranteed that servers on our cloud platform will sometimes suddenly fail or disappear without warning. If we don’t have proper redundancy and automation, these disappearing servers could cause service problems. The Freedom and Responsibility culture at Netflix doesn’t have a mechanism to force engineers to architect their code in any specific way. Instead, we found that we could build strong alignment around resiliency by taking the pain of disappearing servers and bringing that pain forward. We created Chaos Monkey to randomly choose servers in our production environment and turn them off during business hours,” explains Netflix.

The rate at which Chaos Monkey turns off servers is higher than the rate at which server outages normally happen, and Chaos Monkey is configured to do so during business hours. Engineers are thus forced to build resilient services through automation, redundancy, fallbacks, and other best practices of resilient design.

While previous versions of Chaos Monkey could additionally perform actions like burning up CPU and taking storage devices offline, Netflix uses Chaos Monkey 2.0 only to terminate instances. Chaos Monkey 2.0 is fully integrated with Spinnaker, Netflix’s open source multi-cloud continuous delivery platform, which is intended to make cloud deployment models easy to extend and enhance. The integration allows service owners to set their Chaos Monkey 2.0 configs through their Spinnaker apps and lets Chaos Monkey 2.0 get information from Spinnaker about how services are deployed.
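Chaos Monkey itself is a Java service wired into Spinnaker, but the behavior it is famous for can be sketched in a few lines: during business hours, pick a random instance from a group that has opted in and terminate it. The cloud client interface below is hypothetical, not Netflix's actual API.

```typescript
// Not Netflix's implementation - just the core idea: during business hours,
// pick one random instance from an opted-in group and terminate it.
interface CloudClient {                              // hypothetical API
  listInstances(group: string): Promise<string[]>;
  terminate(instanceId: string): Promise<void>;
}

function isBusinessHours(now = new Date()): boolean {
  const day = now.getDay();                          // 0 = Sunday, 6 = Saturday
  const hour = now.getHours();
  return day >= 1 && day <= 5 && hour >= 9 && hour < 17;
}

async function chaosMonkeyTick(cloud: CloudClient, group: string): Promise<void> {
  if (!isBusinessHours()) return;                    // engineers are around to respond
  const instances = await cloud.listInstances(group);
  if (instances.length === 0) return;
  const victim = instances[Math.floor(Math.random() * instances.length)];
  console.log(`Terminating ${victim} from ${group}`);
  await cloud.terminate(victim);
}
```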

Once Netflix realized the enormous potential of breaking things on purpose to rebuild them better, the company decided to take things to the next level and move from the small scale to the very large scale with the 2013 release of Chaos Kong, a tool capable of testing how Netflix’s services behave when a zone or an entire region is taken down. According to Nir Alfasi, a Netflix engineer, the company practices region outages with Chaos Kong almost every month.

“What we need is a way to limit the impact of failure testing while still breaking things in realistic ways. We need to control the outcome until we have confidence that the system degrades gracefully, and then increase it to exercise the failure at scale. This is where FIT (Failure Injection Testing) comes in,” stated Netflix in early 2014, after realizing that it needed a finer degree of control when deliberately breaking things than its existing tools allowed at the time. FIT is a platform designed to simplify the creation of failure within Netflix’s ecosystem with a greater degree of precision, and it allows Netflix to propagate failures across the company in a consistent and controlled manner. “FIT has proven useful to bridge the gap between isolated testing and large-scale chaos exercises, and make such testing self-service.”

Once the Chaos Engineering team at Netflix believed that they had a good story at small scale (Chaos Monkey) and large scale (Chaos Kong) and in between (FIT), it was time to formalize Chaos Engineering as a practice, which happened in mid-2015 with the publication of the Principles of Chaos Engineering. “With this new formalization, we pushed Chaos Engineering forward at Netflix. We had a blueprint for what constituted chaos: we knew what the goals were, and we knew how to evaluate whether or not we were doing it well. The principles provided us with a foundation to take Chaos Engineering to the next level,” write Casey Rosenthal, Lorin Hochstein, Aaron Blohowiak, Nora Jones, and Ali Basiri in “Chaos Engineering: Building Confidence in System Behavior through Experiments.”

The latest notable addition to Netflix’s Chaos Engineering family of tools is ChAP (Chaos Automation Platform), which was launched in late 2016. “We are excited to announce ChAP, the newest member of our chaos tooling family! Chaos Monkey and Chaos Kong ensure our resilience to instance and regional failures, but threats to availability can also come from disruptions at the microservice level. FIT was built to inject microservice-level failure in production, and ChAP was built to overcome the limitations of FIT so we can increase the safety, cadence, and breadth of experimentation,” wrote Netflix when introducing its new failure-testing automation tool.

Although Netflix isn’t the only company interested in Chaos Engineering, their willingness to develop in the open and share with others has had a profound influence on the industry. Besides regularly speaking at various industry events, Netflix’s GitHub page contains a wealth of interesting open source projects that are ready for adoption.

Chaos Engineering is also being embraced by Etsy, Microsoft, Jet, Gremlin, Google, and Facebook, just to name a few. These and other companies have developed a comprehensive range of open source tools for different use cases. The tools include Simoorg (LinkedIn’s own failure inducer framework), Pumba (a chaos testing and network emulation tool for Docker), Chaos Lemur (self-hostable application to randomly destroy virtual machines in a BOSH-managed environment), and Blockade (a Docker-based utility for testing network failures and partitions in distributed applications), just to name a few.

Learn to Embrace Chaos

If you now feel inspired to embrace the principles and tools described above and create your own Chaos Engineering experiments, you may want to follow the experiment design process outlined in “Chaos Engineering: Building Confidence in System Behavior through Experiments” (a minimal code sketch of the process follows the list).
  1. Pick a hypothesis
    • Decide what hypothesis you’re going to test and don’t forget that your system includes the humans that are involved in maintaining it.
  2. Choose the scope of the experiment
    • Strive to run experiments in production and minimize blast radius. The closer your test is to production, the more you’ll learn from the results.
  3. Identify the metrics you’re going to watch
    • Try to operationalize your hypothesis using your metrics as much as possible. Be ready to abort early if the experiment has a more serious impact than you expected.
  4. Notify the organization
    • Inform members of your organization about what you’re doing and coordinate with multiple teams who are interested in the outcome and are nervous about the impact of the experiment.
  5. Run the experiment
    • The next step is to run the experiment while keeping an eye on your metrics in case you need to abort it.
  6. Analyze the results
    • Carefully analyze the result of the experiment and feed the outcome of the experiment to all the relevant teams.
  7. Increase the scope
    • Once you gain confidence running smaller-scale experiments, you may want to increase the scope of an experiment to reveal systemic effects that aren’t noticeable with smaller-scale experiments.
  8. Automate
    • The more regularly you run your Chaos Experiments, the more value you can get out of them.
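To make the process above concrete, here is a small, hedged sketch of how such an experiment could be described as data and run end to end. Every field and function name is illustrative rather than taken from any particular Chaos Engineering tool.

```typescript
// Illustrative experiment description mirroring the steps above; the field
// names are not taken from any particular Chaos Engineering tool.
interface ChaosExperiment {
  hypothesis: string;                                      // step 1
  scope: "staging" | "small-production" | "production";    // step 2
  metricsToWatch: string[];                                // step 3
  notify: string[];                                        // step 4: teams to inform
  inject: () => Promise<void>;                             // step 5
  rollback: () => Promise<void>;
  steadyStateHolds: () => Promise<boolean>;
}

async function runExperiment(e: ChaosExperiment): Promise<void> {
  console.log(`Notifying ${e.notify.join(", ")}: ${e.hypothesis} (${e.scope})`);
  await e.inject();
  try {
    const ok = await e.steadyStateHolds();                 // step 6: analyze the results
    console.log(ok ? "Hypothesis held" : "Weakness found - file follow-up work");
  } finally {
    await e.rollback();
  }
}
```

Steps 7 and 8, increasing the scope and automating, amount to widening the `scope` field over time and scheduling `runExperiment` to run continuously once the smaller experiments have built enough confidence.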
Since some degree of chaos and unpredictability is inevitable, why not embrace it? “The next step is to institutionalize chaos, perhaps by embracing Netflix’s open source Simian Army. But really [embracing Chaos Engineering] is not so much a matter of technology as it is culture. Telling your developers to expect and foster failure as a way to drive resilience into your cloud systems is a big step on the path to engineering in the 21st Century. Time to get started,” concludes Matt Asay in his article on the subject.

Conclusion

Chaos Engineering is a remarkably valuable discipline that can help any business or organization build resilient distributed systems capable of withstanding the challenges and adversities they might face. It can be practiced at any scale and any level of automation. Despite its young age, Chaos Engineering has already changed how we think about failure testing, and thanks to companies such as Netflix there’s also a sizable range of Chaos Engineering tools available to anyone who would like to experience first-hand what the discipline has to offer.


Beacon Technology and Mobile Marketing

Written by Brooks Canavesi on July 8, 2016. Posted in Blog, Mobile App Development, Software & App Sales, Uncategorized

If you live in a first-world country, chances are that most of your daily activity takes place indoors, where GPS often cannot provide accurate location information. Beacons are low-cost pieces of hardware powered by Bluetooth Low Energy (BLE). Their main purpose is to provide an inexpensive way to accurately target individual smartphone or tablet users and send messages or prompts directly to their devices.
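In practice, a beacon simply broadcasts a short identifier over BLE, and the receiving app estimates how close the user is from the received signal strength before deciding what to show. A common way to approximate that distance is the log-distance path-loss model sketched below; the calibration values are examples, not constants from any particular beacon vendor.

```typescript
// Rough proximity estimate from a beacon advertisement using the
// log-distance path-loss model. Calibration values are illustrative.
function estimateDistanceMeters(
  rssi: number,             // received signal strength, e.g. -72 dBm
  txPower = -59,            // expected RSSI at 1 m, calibrated per beacon model
  pathLossExponent = 2.0    // ~2 in free space, higher indoors
): number {
  return Math.pow(10, (txPower - rssi) / (10 * pathLossExponent));
}

// e.g. estimateDistanceMeters(-72) is roughly 4.5 m - close enough to decide
// whether to show an aisle-specific offer or a store-wide one.
```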

Even though the technology is still in its infancy, ABI Research estimates that 3.9 million BLE beacons shipped globally in 2015. That’s because retailers, manufacturers, hotels, educational institutions, and governments see how transformative beacons could be for logistics, customer engagement, and information transmission.

Companies like Zebra are leading the way with innovative products like MPact. Zebra’s marketing site states: “MPact is the only indoor locationing platform to unify Wi-Fi and Bluetooth® Smart technology, improving locationing accuracy, while allowing you to connect to the most possible customers and capture more analytics and insight. Service is re-defined through impactful interactions with customers via the one device they almost always have in hand – their mobile phone. The result? Instant visibility into where customers are in your facility – and the ability to automatically take the best action to best serve each customer at any time during their visit.”

According to ZDNet, the largest retail deployment of beacons to date was carried out by drug store chain Rite Aid. The company recently announced a distribution of proximity beacons in each of its 4,500 U.S. stores.

Statistics from Swirl, a mobile presence management and marketing platform, explain why: relevant mobile offers delivered to smartphones while shopping in a store would significantly influence the likelihood of making a purchase for 72% of consumers. What’s more, 80% of consumers would welcome the option to use a mobile app while shopping in a store if that app delivered relevant sales and promotional notifications. That’s a staggering improvement over traditional push notifications, which are opened only about 14 percent of the time, according to mobile advertising firm Beintoo.

As more retailers implement beacons to offer flash sales, provide customers with more product information, and speed up the checkout process, we can expect a dramatic rise in the rate of their adoption. A report from BI Intelligence says that “US in-store retail sales influenced by beacon-triggered messages will see a nearly tenfold increase between 2015 and 2016, from $4.1 billion to $44.4 billion.”

Mobile marketers and developers will have to learn new tricks to fully capitalize on the wealth of opportunities that beacon technology presents.

The Internet of Things and the (R)Evolution of Manufacturing

Written by Brooks Canavesi on May 22, 2016. Posted in Mobile App Development, Software & App Sales, Technology trends

Manufacturing is about to undergo a transformation that could have consequences similar to those of the Industrial Revolution, which took place from the 18th to the 19th century and completely changed the face of what was, until then, largely rural Europe and America. That’s because the Internet of Things and smart manufacturing can create the perfect decision-making environment and help companies of all sizes optimize every aspect of their operations and maximize their revenue, as illustrated by King’s Hawaiian, producer of frozen entrées in a bowl as well as Hawaiian bread. The company managed to put out an extra 180,000 pounds of bread every day, effectively doubling its previous production, as reported by Forbes.

The same success story can also be told by General Electric. More than 10,000 sensors in its Durathon battery factory in Schenectady provide the company with a non-stop stream of data. Using cutting-edge statistical approaches and Big Data analysis, General Electric can get an instant overview of its entire production and tweak it as it sees fit. According to Industry Week, Siemens’ electronics manufacturing plant in Amberg, Germany uses around 1,000 controllers to handle up to 75 percent of the value chain autonomously.

Given these fascinating examples, it may come as a surprise that “only 10 percent of industrial operations are currently using the connected enterprise,” according to John Nesi, vice president of market development at Rockwell Automation. What’s more, apparently, one in five factories today are completely cut off from the Internet, as discovered by SCM World’s recent survey.

However, this number is expected to drop to near zero within the next five years, resulting in almost 50 billion connected endpoints. It won’t take long before every single instrument, machine, and part is aware of all the other parts around it.


Benefits and Disadvantages of Hybrid Mobile Applications

Written by Brooks Canavesi on May 15, 2016. Posted in Mobile App Development, Software & App Sales, Technology trends, Uncategorized

Mobile marketing has become one of the most important, if not the most important, parts of just about any marketing strategy. People rely on their mobile devices for almost any activity imaginable, and any company that is not part of this global trend seems out of touch. Traditionally, there were two main ways to establish a mobile presence: one was to create a fully native application written in a programming language used by the targeted platform, and the other was to stick with a regular website and give up on the native look and feel. Now, in 2016, we have reached the point where, according to Gartner’s 2013 mobile and wireless predictions, more than 50 percent of mobile applications deployed should be hybrid.

With the imminent market domination of hybrid applications ahead of us, now is a great time to look at their benefits and see what exactly is behind their popularity. We also won’t avoid their main negatives, in order to get a clear, comprehensive picture of their role in the mobile market.

What are Hybrid Mobile Applications?

Let’s start with a brief background: native applications are built using a platform-specific programming language (Objective-C for iOS and Java for Android) and can use all the native functionality of mobile devices and mobile operating systems, including GPS, access to the filesystem, and common user interface elements. As a result, they usually offer a consistent user experience and great performance, but they are tied to the single environment they were developed for.

One could say that hybrid applications actually have more in common with web apps than with native apps. The reason is that they are essentially web apps wrapped in a native web view. What makes them special is the framework they are built with, which makes it easy to use the native functions of each mobile platform through cross-platform APIs. Frameworks like Cordova require nothing more than a knowledge of HTML, CSS, and JavaScript, tools very familiar to all web developers.
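As a small illustration of the "web code calling native features" idea, the sketch below listens for Cordova's deviceready event and then uses the standard geolocation API; with the cordova-plugin-geolocation plugin installed, that same JavaScript call is backed by the platform's native location services. This is a minimal sketch, not a complete app.

```typescript
// Hybrid apps are web code, so native features are reached through the same
// JavaScript APIs once the Cordova runtime signals that it is ready.
document.addEventListener("deviceready", () => {
  // Backed by native location services when cordova-plugin-geolocation is installed.
  navigator.geolocation.getCurrentPosition(
    (position) => {
      console.log(`lat=${position.coords.latitude}, lon=${position.coords.longitude}`);
    },
    (error) => console.error(`Location error: ${error.message}`),
    { enableHighAccuracy: true, timeout: 10_000 }
  );
});
```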

Main Benefits of Hybrid Mobile Apps

With the introduction behind us, it’s time to take a closer look at some of the main benefits of hybrid mobile apps. We are not trying to include every single positive aspect of hybrid apps; instead, we are focusing solely on their advantages over native and web applications.

Unified Development

By far the biggest benefit that hybrid mobile apps offer is unified development. Companies can save a substantial amount of money that would otherwise have to be spent on developing and maintaining separate code bases for different mobile platforms. They can develop just a single version and let their hybrid framework of choice do the heavy lifting and ensure that everything works flawlessly.

This, of course, directly leads to a lower cost of development and, potentially, greater revenue. Many small businesses wouldn’t be able to afford to target all major mobile platforms if they couldn’t do so with a hybrid framework.

Fast Deployment

The Minimum Viable Product (MVP) approach necessitates the fast deployment of functional solutions in order to be the first to penetrate the market and gain a substantial competitive advantage. Those who need to have their app in the App Store as fast as possible should seriously consider using hybrid applications.

Low-Level Access

Basic web applications are cut off from smartphones’ operating systems and built-in functionality. Even though they are getting smarter every day, they still don’t come anywhere near native applications. Hybrid applications elegantly bridge the gap between the two other approaches and provide all the extra functionality with very little overhead. As a result, developers can realize a much wider range of ideas and capture the attention of their target audience.

Offline Support

Web applications are critically limited by their lack of offline support. This may seem like a less important issue for people who live in urban areas, where high-speed Internet access is ubiquitous, but potential customers in rural areas and less developed countries could be cut off from the application entirely. At the end of the day, one customer survey showed that 79 percent of consumers would retry a mobile app only once or twice if it failed to work the first time, and only 16 percent of consumers would give it more than two attempts. Local storage can also dramatically enhance the overall user experience by storing personal information and preferences for later use.
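A minimal sketch of the offline-support idea: cache the last successful response locally and fall back to it when the network call fails. localStorage and the URL parameter are used here purely for brevity and illustration; a real app would more likely use a dedicated storage plugin or IndexedDB.

```typescript
// Fetch fresh data when online, fall back to the locally cached copy when the
// request fails (e.g. no connectivity). Simplified for illustration.
async function loadPreferences(url: string): Promise<unknown> {
  try {
    const response = await fetch(url);
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    const data = await response.json();
    localStorage.setItem("preferences", JSON.stringify(data)); // cache for offline use
    return data;
  } catch {
    const cached = localStorage.getItem("preferences");
    if (cached) return JSON.parse(cached);                     // offline fallback
    throw new Error("No network and no cached preferences");
  }
}
```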

Scaling

Hybrid applications are limited only by the underlying framework. Companies that partner with a good provider can instantly target all major platforms without any additional effort. If the framework is popular enough, it can be expected to quickly add support for any new mobile operating systems and their respective incremental updates.

Main Disadvantages of Hybrid Mobile Apps

It would be unfair to ignore the main disadvantages of hybrid applications and paint an unrealistic picture that doesn’t tell the whole story. As much as hybrid apps can help small and medium-sized businesses reach wide audiences, they are also limited in several critical ways.

Performance

Hybrid apps add an extra layer between the source code and the target mobile platform: the particular hybrid mobile framework, such as Ionic, Cordova, Onsen, Kendo, and many others. The unsurprising result is a possible loss of performance. It really varies from application to application just how noticeable the difference can be, but the fact that Facebook migrated their mobile application from HTML5 to native shows that there really can be a significant difference, at least for large-scale applications. Mark Zuckerberg even went on to say that “The biggest mistake we’ve made as a company is betting on HTML5 over native.”

After all, 84 percent of users consider performance to be an important or very important factor, according to A Global Study of Consumers’ Expectations and Experiences of Mobile Applications by Dynatrace, an American application performance management (APM) software company with products aimed at the information technology departments and digital business owners of medium and large businesses.

Debugging

That extra layer also makes debugging a potential nightmare. Developers have to rely on the framework itself to play nicely with the targeted operating system and not introduce any new bugs. Since developers are not likely to have a deep knowledge of the targeted platform, figuring out the exact cause of an issue can be a lengthy affair.

Features

It’s hard to believe that the first iPhone was released as recently as 2007. We have come such a long way since then, and the mobile industry is showing no signs of slowing down. Mobile operating systems keep evolving at a much faster pace than their desktop counterparts, and many people now use smartphones and tablets as their primary computing devices.

Companies that want to stand at the very apex of progress and use all the latest and greatest features and hardware capabilities are likely to experience difficulties trying to achieve their goals with hybrid frameworks. It can take quite a bit of time before new features are implemented by the providers of these frameworks.

Conclusion

Hybrid mobile applications have their place in every situation where fast development is the main priority or where the high cost of targeting each platform with an individual native application would be downright prohibitive. Big players and companies that need to stay on top of the latest developments are not likely to sacrifice performance and control. However, it may be just a matter of time before hybrid application frameworks reach such a high level of maturity that all the previously mentioned negatives simply disappear.


