Posts Tagged ‘mobile app’

In-Display Fingerprint Sensors Are Here

Written by Brooks Canavesi on February 25, 2018. Posted in IoT, Mobile App Development, Technology trends

At CES 2018, which took place January 9–12 in Las Vegas, Chinese technology company Vivo unveiled the first smartphone with an in-display fingerprint sensor: the Vivo X20 Plus UD, the result of a collaboration between Vivo and Synaptics, a California-based developer of touchpads and sensors.

Brief History of In-Display Fingerprint Sensors

In-display fingerprint sensors have been in the making for many years. Synaptics first debuted a fingerprint sensor capable of scanning through display glass, the Natural ID FS9100, in late 2016, and many expected the sensor to appear in a smartphone from Apple or Samsung in 2017. Synaptics VP of marketing Anthony Gioeli said in a press release, “Synaptics’ FS9100 family of fingerprint sensors represent a new breed of optical fingerprint sensor technology that is designed to meet the needs of mobile devices, including the ability to image through thick 2.5D glass. In addition to opening the door to new industrial design options, it enables OEMs to provide highly durable, button-free cover glass and more easily provide water-resistant products while eliminating low yield glass processing.”

But contrary to the expectations of many, in September 2017 Apple software engineering chief Craig Federighi said in an interview that Apple sees Face ID as the future of biometric authentication and wants to move away from Touch ID and other forms of fingerprint-based authentication. Because the Samsung Galaxy S8, released in April 2017, also featured a camera-based authentication mechanism, an iris scanner, it seemed for a while that the time of fingerprint sensors had passed.

That changed in December 2017, when Synaptics announced that it had begun mass production of its new Clear ID FS9500 family of optical in-display fingerprint sensors with a top-five OEM: “Designed for smartphones with infinity displays, Synaptics’ Clear ID in-display fingerprint sensors magically activate in the display only when needed. Clear ID is faster than alternative biometrics such as 3D facial, highly-secure with SentryPoint technology, and very convenient with one-touch/one-step biometric authentication directly in the touchscreen display area of smartphones.” While Synaptics did not specify which smartphone manufacturers it was working with, the most likely suspects were Samsung, Oppo, Vivo, and Huawei, and we now know that Vivo was definitely among them.

Vivo X20 Plus UD

Apart from its innovative fingerprint reader, the Vivo X20 Plus UD is not radically different from other high-end Android smartphones. It features a 6.43-inch 2160 x 1080 18:9 OLED display, which is a prerequisite for the FS9500 optical in-display fingerprint sensor to work. The smartphone also has a Qualcomm Snapdragon 660 processor with 4 GB of RAM, dual 12-megapixel cameras, and a 3,905 mAh battery. The Vivo X20 Plus UD is currently available only in the Chinese market.

From the user’s perspective, the in-display fingerprint sensor, completely hidden underneath the OLED display, works just like a regular capacitive fingerprint sensor. “The mechanics of setting up your fingerprint on the phone and then using it to unlock the device and do things like authenticate payments are the same as with a traditional fingerprint sensor,” commented Vlad Savov for The Verge. “The only difference I experienced was that the Vivo handset was slower — both to learn the contours of my fingerprint and to unlock once I put my thumb on the on-screen fingerprint prompt—but not so much as to be problematic.”

How Do In-Display Fingerprint Sensors Work?

Unlike capacitive fingerprint sensors, which generate an image of the ridges and valleys that make up a fingerprint using electrical current, optical fingerprint sensors, like the FS9500 optical in-display fingerprint sensor from Synaptics, use light to capture an image of a fingerprint. The captured image is then analyzed by an advanced artificial intelligence fingerprint recognition algorithm trained to recognize over 300 distinctive characteristics of fingerprints. In a press release from December 12, 2017, Synaptics boasts that the FS9500 sensor comes with several unique and highly secure authentication features, including Quantum Matcher for adaptive fingerprint template matching and authentication, PurePrint to distinguish between spoofs and actual fingers, and SecureLink to combine support for TLS protocol with ECC authentication and AES encryption.
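To give a feel for the matching step, here is a deliberately simplified sketch (not Synaptics’ proprietary Quantum Matcher): each fingerprint is reduced to a set of extracted feature points, and a candidate scan is accepted when enough of the enrolled template’s features are found in it. All names and thresholds below are illustrative assumptions.

```python
# Toy fingerprint template matching: a print is modeled as a set of
# (x, y, feature_type) points; matching is measured by set overlap.

def match_score(template: set, candidate: set) -> float:
    """Fraction of the enrolled template's features found in the scan."""
    if not template:
        return 0.0
    return len(template & candidate) / len(template)

def authenticate(template: set, candidate: set, threshold: float = 0.8) -> bool:
    """Accept the scan when enough template features are present."""
    return match_score(template, candidate) >= threshold

enrolled = {(12, 40, "ridge_end"), (55, 71, "bifurcation"), (90, 23, "ridge_end"),
            (33, 60, "bifurcation"), (70, 10, "ridge_end")}
scan_ok = enrolled - {(70, 10, "ridge_end")}   # 4 of 5 features detected
scan_bad = {(1, 2, "ridge_end"), (3, 4, "bifurcation")}

print(authenticate(enrolled, scan_ok))    # True  (score 0.8)
print(authenticate(enrolled, scan_bad))   # False (no overlap)
```

Real matchers work on noisy, rotated, partially captured images rather than exact coordinates, which is why the production algorithms compare hundreds of characteristics and adapt the template over time.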

Expect Smartphones with In-Display Fingerprint Sensors

“Consumers prefer fingerprint authentication on the front of the phone, and with the industry quickly shifting to bezel-free OLED infinity displays, the natural placement of the fingerprint sensor is in the display itself,” said Kevin Barber, senior vice president and general manager of Synaptics’ Mobile Division. Even though the Samsung Galaxy S9 likely won’t feature an in-display fingerprint sensor, a patent filed by the company was recently spotted by TechTastic, showing a handset with a fingerprint scanner built into the display. This suggests that the Samsung Galaxy Note 9, or some other future smartphone from Samsung, will come with an in-display fingerprint sensor, and other smartphone manufacturers are likely to follow suit.

Top 3 Pitfalls of Mobile Application Development

Written by Brooks Canavesi on January 3, 2017. Posted in Blog, Mobile App Development

A lucid mobile strategy plays a vital role in marketing efforts and businesses’ overall chances of success. With more than 50 percent of global internet users accessing the internet via mobile, a humble app serves as a gateway to the global market, inviting hundreds of millions of potential customers to visit your business or try out your service or product.

However, mobile development is a complex process with many pitfalls that pose a grave danger to those who come unprepared. The average app can cost $40,000–$100,000+ and take almost 18 weeks to publish. Those two figures alone should be enough to convince you that it pays to properly plan your mobile application development and familiarize yourself with the top 3 pitfalls described in this article.

1. Over-Promising

Thousands of new mobile apps are released every single day, and only a handful of them ever gain any significant traction. Backed by venture capital, overzealous developers often promise features they ultimately can’t implement.

Users have grown used to this and have developed a keen intuition for spotting empty promises, and companies that indulge in them get mercilessly bashed on social media sites like Reddit and Twitter. The last thing you want is negative marketing before you even have a chance to launch your app.

The solution is, as always, simple: focus your marketing efforts on the key features of your app and, if you can, over-deliver at launch. It’s always better to let users discover what extra things the app can do than to risk bad reviews. You can always introduce additional features in subsequent updates, which also serves as a nice segue into the next point on the list. The term MVP (Minimum Viable Product) is tossed around the development industry frequently; however, challenge yourself to consider a CVP (Commercially Viable Product) instead. A CVP is what you need to compete in a fast-paced industry where STM (Speed to Market) can make or break an idea or company.

2. Letting Your Apps Sit Stale

Many see the release date as the finish line after a long and often strenuous run. In reality, the initial development phase is more akin to the months-long preparation before stepping on the stage of the Mr. Olympia competition. A few slip-ups and your entire effort could go to waste. And if you want to enter the stage again next year, you had better do what you can to become even stronger.

Both the Google Play Store and Apple’s App Store have many long-term apps that have remained popular to this very day. The continuous effort of their developers has been helping them keep pace with OS updates, releases of new design guidelines, and requests from the users.

On the other hand, both app stores are filled with apps that were once incredibly popular but fell into obscurity due to a lack of updates. App companies need to budget for continuous improvements and updates, as Apple and Google can update their respective OSes without much warning, and failing to keep up could render your app useless. To stay relevant, expect to spend money on ongoing app maintenance and new features. More often than not, simply listening to what your users say about the app is enough to give you a good general sense of where your app should be heading.

3. Spreading Yourself Too Thin

If you look at Wikipedia’s list of mobile phone features, you’ll quickly realize just how much mobile phone development has evolved over the past 10 years. And if you consider where the industry is heading, it becomes clear that the average mobile app of today will likely have little in common with the average mobile app released 10 years from now.

The growing complexity of mobile app development boils down to one thing: it’s impossible to have it all. During a conference meeting, it’s easy to throw in features such as push notifications, cloud synchronization, multi-OS support, and many others, but it’s usually much harder to implement all of them in a timely manner and without exceeding the budget.

Instead of asking yourself “How can I build an app that takes advantage of all these cool things?” as Julian Mellicovsky puts it in his blog post, try to get in the mind of your users and focus on solving their most painful problems while prioritizing most-wanted features. This will prevent your app from becoming bloated and overly complex.

Use a similar approach to streamline the user interface and user experience. Focus testing is a great way to keep your own design sensibilities from interfering with what your users want and like to use.


Why Translators Fear Google

Written by Brooks Canavesi on December 30, 2016. Posted in Blog, Mobile App Development, Technology trends

When you type the phrase “translators jobs AI” into Google, the very first result leads to a Quora question titled “When will translators and interpreters lose their jobs to AI?” Paul Denlinger answered it by saying, “They will lose their jobs when computers understand context and nuance.” Well, with Google’s latest big update to Google Translate, that may happen sooner rather than later.

On September 27, 2016, Google rolled out the Google Neural Machine Translation system (GNMT), “which utilizes state-of-the-art training techniques to achieve the largest improvements to date for machine translation quality,” as Quoc V. Le and Mike Schuster explain in their blog post. Until this update, Google had relied on a phrase-based model, which involved breaking sentences into individual phrases and breaking those phrases further into words. The old model was known for producing convoluted and hard-to-understand translations, especially for notoriously difficult language pairs such as Chinese and English.
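The difference between the two approaches can be caricatured in a few lines of code. This is a toy sketch, not Google’s actual models: a phrase-based system translates each chunk independently and so picks the wrong sense of an ambiguous word like “bank”, while a sentence-level system can use the whole sentence as context. The phrase table and the disambiguation rule below are invented for illustration.

```python
# Toy contrast between phrase-by-phrase and sentence-aware translation.

PHRASE_TABLE = {
    "i sat on": "je me suis assis sur",
    "the bank": "la banque",        # default sense: financial bank
    "of the river": "de la rivière",
}

def phrase_based(phrases):
    """Old-style model: translate each phrase independently, no context."""
    return " ".join(PHRASE_TABLE[p] for p in phrases)

def sentence_aware(phrases):
    """Sentence-level model: whole-sentence context picks the right sense
    ("la rive", riverbank) instead of the default ("la banque")."""
    out = phrase_based(phrases)
    if "rivière" in out:
        out = out.replace("la banque", "la rive")
    return out

phrases = ["i sat on", "the bank", "of the river"]
print(phrase_based(phrases))    # wrong sense: "...la banque de la rivière"
print(sentence_aware(phrases))  # context-aware: "...la rive de la rivière"
```

GNMT achieves this with an end-to-end neural network rather than hand-written rules, but the underlying win is the same: the unit of translation is the sentence, not the phrase.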

But, after testing the new version of Google Translate, several tech commentators have reported that the system is now approaching human-level accuracy. Try translating a complex sentence—even one with historicisms and obscure names of places and events—and you will likely be just as surprised by the accuracy of the translation as Nick Statt from The Verge and others were.

That being said, the system is by no means perfect. “GNMT can still make significant errors that a human translator would never make, like dropping words and mistranslating proper names or rare terms,” Google admits. However, by using a system modeled after neural connections in the human brain, these gaps could quickly disappear. The system is even said to have developed “its own internal language to represent the concepts it uses to translate between other languages.”

Surprisingly, the U.S. Bureau of Labor Statistics  predicts a 46 percent increase in translation job opportunities between 2012 and 2022. Tony Guerra, the director of interpretation services at CETRA, even said, “Google Translate has a role: it can provide a very rough and overall sense. But it does not understand or distinguish certain phrases.” That’s exactly where both he and the U.S. Bureau of Labor Statistics may be wrong.

There’s no doubt that high-stakes translators won’t be replaced by computers anytime soon, perhaps never, but what about the millions of translators and interpreters who work on a freelance basis, dealing with an endless stream of mundane translations: descriptions for cheap products, internal company guidelines and policies, and high-school homework? ZDNet lists translator as the 9th job that will be automated by AI and robots. Considering the article was posted more than a year ago, it would be interesting to see where the author would place translators now.

The key here is convenience. It takes time, money, and effort to hire a translator, but it takes just a few seconds to copy-paste the text into Google Translate. And when the software spits out something that looks legitimate, it’s easy to imagine many cases where someone would just decide to use what Google Translate thinks is the right translation. That right there represents an hour or two of work for some freelancer, which doesn’t seem significant until you realize that the Google Translate mobile and web apps account for around 18 million translations per day for Chinese to English alone.

Once Google perfects the technology behind its text-based translation system, it could use it in other, even more interesting ways. “A few wearable tech upstarts are showing interest in the area of translation hearables,” says Sophie Charara in her article for Wareable. One company, Doppler Labs, is developing a real-time hearable device that would allow for in-ear translation of Spanish. A start-up named Waverly Labs has raised $3.5 million on Indiegogo to fund a smart earpiece that translates between users speaking different languages. The device won’t be out until May 2017, but it already serves as clear evidence of the large demand for the technology.

It could be that the profession of a translator will change and adopt these technologies as essential tools for working in a more efficient manner. Machines could do the first draft and real human beings could then use their expertise and knowledge to polish the final product to perfection. But where accuracy and appropriateness of style don’t play a huge role, there will likely be no work left for real translators.


Daydream: Google’s Push into VR

Written by Brooks Canavesi on October 25, 2016. Posted in Blog, Mobile App Development, Technology trends

Two years have passed since Google’s humble entry into the virtual reality market with Cardboard, and the time has come for the company to take its efforts a step further. Their vision for a VR ecosystem powered by capable mobile devices has a prophetic name: Daydream.

The idea behind the project isn’t to throw away Cardboard into the pits of technological obsolescence, but rather to offer customers the next logical destination on their journey towards an immersive VR experience. Clay Bavor, Google’s vice president of virtual reality, commented, “We knew that Cardboard would only go so far, because there’s only so much you can do in terms of immersiveness and interactivity with—let’s be serious—a piece of cardboard, and a phone that was really only meant to be a phone.”

Google is going to keep Cardboard around as a highly affordable way for VR newbies to get a glimpse of what the once-futuristic technology is all about. When the same customers reach the limits of what’s possible with Cardboard, they will know exactly what to buy next: Daydream.

So, what is Daydream? According to Bavor, Daydream is about enabling very high-quality mobile VR. The key word here is “high-quality”. Google is well aware that any truly immersive mobile VR experience requires suitable hardware to run on. Not only that, but the entire ecosystem – apps, operating system, VR glasses – has to work in unison.

That’s why Google is working with their industry partners and in-house developers to create a VR platform the world has not seen before. The first Daydream-compatible smartphones – the ZTE Axon 7 and Asus Zenfone 3 Deluxe – are right around the corner, and so are the other two pillars of Daydream.

Three Pillars of Daydream

During his presentation at Google I/O 2016, Bavor explained the steps Google took to create “our platform for virtual reality.” To succeed, Google needs total control over the VR experience. Customers must know that when they purchase a Daydream-certified smartphone, they won’t run into any compatibility or performance issues.

There’s simply no more room for guesswork like there was (and still is and will be) with Cardboard. Samsung, with their Gear VR, has shown that customers are willing to pay extra for a smartphone that does something extra. But Samsung’s problem is the inability to control the experience from top to bottom. Users have to fight an operating system not meant for virtual reality and apps developed with only a vague idea of the kind of VR headset the apps are going to be consumed through. But all that is about to change.

Smartphones

Google doesn’t want any redundancy in their platform. The company expects smartphone manufacturers to bake all the functionality and features necessary for a smooth VR experience directly into their devices.

This entails powerful hardware capable of pushing demanding graphics at 60 frames per second, low-persistence displays with little to no lag to eliminate ghosting, and high-quality, latency-free sensors to increase the sense of immersion inside a virtual reality world.

Prominent smartphone manufacturers, including Samsung, HTC, LG, Xiaomi, Huawei, ZTE, Asus, and Alcatel, should have Daydream-ready smartphones available this fall. The beauty of this approach is that the whole platform can evolve much like video game consoles do. All Google has to do to take Daydream to the next level is bump up the reference specifications and call the result “Daydream 2.0” or something similar. Given enough time, Daydream could start competing with PC-tethered headsets.

Headset and Controller

The current mobile VR experience is held back by the chaos that reigns among headsets. The market is flooded with headsets ranging from a few dollars to a few hundred dollars. Some are compatible with devices up to 6”, others fit only smaller smartphones; certain models have a wide field of view and high-quality optics, but a slew of other headsets don’t even come close.

With Daydream, Google has taken its experience and created a reference design that others can use as a blueprint to come up with something of equal quality. The sketch shown at Google I/O 2016 depicted a sleek, clean headset that should appeal to regular smartphone owners and tech enthusiasts alike.

To complement the headset, Google has also created a VR-optimized controller that’s both powerful and intuitive. Its DNA can be traced back to the Wii Nunchuk controller, as its minimalist appearance and built-in orientation sensor for intuitive motion control suggest. A circular clickable touchpad brings touchscreen gestures to the virtual world, and two buttons – a home button and an app button – provide all the essential functionality without overwhelming users with options.

Apps

The team behind Daydream has been working with the core Android team to include VR capabilities in the next version of the mobile operating system. Starting with Android N, users will be able to put their smartphones into a special VR mode, which changes the UI and makes the device easily controllable even while in VR. Substantial changes have also been made under the hood to allow for very low latency of less than 20 ms and increased performance.
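A little back-of-the-envelope arithmetic shows why those numbers are demanding. At 60 frames per second each frame has a budget of 1000/60 ≈ 16.7 ms, so a sub-20 ms motion-to-photon target leaves almost no headroom once sensor, render, and display delays are added up. The function names and the example delay figures below are illustrative, not Google’s published breakdown.

```python
# Frame-budget arithmetic behind mobile VR latency targets.

def frame_budget_ms(fps: float) -> float:
    """Time available to produce one frame at a given frame rate."""
    return 1000.0 / fps

def within_latency_target(sensor_ms: float, render_ms: float,
                          display_ms: float, target_ms: float = 20.0) -> bool:
    """Does the whole motion-to-photon pipeline fit the latency target?"""
    return (sensor_ms + render_ms + display_ms) <= target_ms

print(round(frame_budget_ms(60), 1))          # 16.7 ms per frame at 60 fps
print(within_latency_target(2.0, 14.0, 3.0))  # True  (19 ms total)
print(within_latency_target(5.0, 16.0, 4.0))  # False (25 ms total)
```

In other words, missing the per-frame budget even slightly pushes the pipeline over the latency target, which is why Daydream requires the entire stack, from sensors to display, to be optimized together.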

Clay Bavor said that Google will take a strong stance on quality and not allow badly written, designed, or optimized apps to enter the Daydream ecosystem. “We want to make sure that we’re representing good VR to our users.”

Google wants to lead by example with its own apps. The Play Store, YouTube, and Street View are all going to be optimized at launch to deliver an uncompromising user experience. A new face in Google’s ever-larger collection of apps is the Daydream Hub, a kind of home screen that acts as a gateway to all games and virtual reality apps.

More VR apps are expected to pop up in the near future, thanks to Google’s partnerships with the New York Times, WSJ, USA Today, Netflix, Hulu, IMAX, the NBA, MLB, and many others. New VR games from EA, Ubisoft, NetEase, and OtherSide Entertainment are also on the horizon, so there’s no reason to doubt that there will be enough compelling content available when Daydream takes off.

It’s Coming

The release of Daydream is planned for autumn 2016, shortly after Android Nougat is out. Given that Cardboard headsets hover around $10 and the Gear VR currently goes for $100, Daydream will probably be positioned just above the Gear VR in terms of price.

Google’s vision of the future seems to involve ultra-powerful smartphones that act as keys to countless invisible virtual worlds around us. They recognize that once we reach the point where we no longer need to own a laptop or PC just to get more processing power, smartphones and mobile operating systems will become the only thing that matters.

Beacon Technology and Mobile Marketing

Written by Brooks Canavesi on July 8, 2016. Posted in Blog, Mobile App Development, Software & App Sales, Uncategorized

If you live in a first-world country, chances are that most of your daily activity takes place indoors, where GPS often cannot provide accurate location information. Beacons are low-cost pieces of hardware powered by Bluetooth Low Energy (BLE). Their main purpose is to provide an inexpensive way to accurately target individual smartphone or tablet users and send messages or prompts directly to their devices.
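How does an app know a shopper is near a particular shelf? A common approximation is the log-distance path-loss model: the beacon broadcasts a calibrated TX power (the RSSI measured at 1 m), and the app compares it with the live RSSI reading. The sketch below is a minimal illustration; real deployments smooth RSSI heavily and typically map distance into coarse zones rather than trusting exact meters.

```python
# Estimating distance to a BLE beacon from signal strength (RSSI),
# using the log-distance path-loss model.

def estimate_distance_m(rssi: float, tx_power: float = -59.0,
                        n: float = 2.0) -> float:
    """tx_power is the calibrated RSSI at 1 m; n is the path-loss
    exponent (~2 in free space, higher indoors with obstacles)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

print(round(estimate_distance_m(-59), 2))  # 1.0 m (at the calibration point)
print(round(estimate_distance_m(-79), 2))  # 10.0 m (20 dB weaker, n=2)
```

The -59 dBm calibration value and the free-space exponent are example figures; each beacon model ships with its own calibrated TX power, and indoor environments push n well above 2.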

Even though beacons are still in their infancy, ABI Research estimates that 3.9 million BLE beacons shipped globally in 2015. That’s because retailers, manufacturers, hotels, educational institutions, and governments see how transformative they could be for logistics, customer engagement, and information transmission.

Companies like Zebra are leading the way with innovative products like MPact. Zebra’s marketing site states, “MPact is the only indoor locationing platform to unify Wi-Fi and Bluetooth® Smart technology, improving locationing accuracy, while allowing you to connect to the most possible customers and capture more analytics and insight. Service is re-defined through impactful interactions with customers via the one device they almost always have in hand – their mobile phone. The result? Instant visibility into where customers are in your facility – and the ability to automatically take the best action to best serve each customer at any time during their visit.”

According to ZDNet, the largest retail deployment of beacons to date was carried out by drug store chain Rite Aid, which recently announced the distribution of proximity beacons across its 4,500 U.S. stores.

Statistics from Swirl, a mobile presence management and marketing platform, explain why: relevant mobile offers delivered to smartphones while shopping in a store would significantly influence the likelihood of making a purchase for 72% of consumers. What’s more, 80% of consumers would welcome the option to use a mobile app while shopping in a store if that app delivered relevant sales and promotional notifications. That’s a staggering improvement compared to traditional push notifications, which are opened only about 14 percent of the time, according to mobile advertising firm Beintoo.

As more retailers implement beacons to offer flash sales, provide customers with more product information, and speed up the checkout process, we can expect a dramatic rise in the rate of their adoption. A report from BI Intelligence says that “US in-store retail sales influenced by beacon-triggered messages will see a nearly tenfold increase between 2015 and 2016, from $4.1 billion to $44.4 billion.”

Mobile marketers and developers will have to learn new tricks to fully capitalize on the wealth of opportunities that the beacon technology presents.