Using Google’s MLKit and CameraX for Lightweight Barcode Scanning

Bea Kutis
May 17, 2021


What is Google’s MLKit?

MLKit is a powerful machine learning library optimized for mobile applications. Previously known as ML Kit for Firebase, it is now a standalone library. It includes a variety of APIs, from barcode scanning to text recognition and translation. The Barcode Scanning API is easy to use, runs on the device (no network connection required), and decodes most standard barcode formats, including linear formats such as UPC-A and UPC-E, as well as 2D formats such as QR codes.

More documentation here

What is CameraX?

CameraX is a Jetpack library that solves a lot of the frustrations Android developers have traditionally had to deal with when handling camera functionality. Things like aspect ratio and orientation are handled automatically, leaving the developer free to focus on building robust user experiences rather than worrying about camera configuration.

Similar to MLKit, CameraX is very easy to use. The functionality of CameraX is divided into different “use cases”. Currently there are three: Preview, Image analysis, and Image capture. The Preview use case is used for displaying a live view of the camera; the Image analysis use case lets us analyze that live preview, and the Image capture use case takes and saves a photo. To implement barcode scanning, we will use the Preview use case to display the camera to the user and Image analysis to send image buffers to MLKit to decode the barcodes.

More documentation here

Let’s Build!

1. Setting Up

First, add the needed MLKit and CameraX dependencies to your build.gradle file:

def camerax_version = "1.0.0-rc01"
def camerax_view_version = "1.0.0-alpha20"
def mlkit_version = "16.1.3"

implementation "androidx.camera:camera-camera2:${camerax_version}"
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
implementation "androidx.camera:camera-view:${camerax_view_version}"
implementation "com.google.android.gms:play-services-mlkit-barcode-scanning:${mlkit_version}"

Note: This MLKit dependency will dynamically download the barcode scanning model from Google Play Services. Alternatively, the model can be bundled with your app. More info here.

Dynamically downloading the model has pros and cons. Our APK size is not affected, but users can end up in a less-than-ideal state if the model fails to download when needed (more on this later).
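
If the download risk is a concern, the model can instead be bundled directly into the app, as mentioned above. As a rough sketch (the bundled artifact name comes from the MLKit docs, but double-check the current version number before relying on it), the alternative dependency looks like this:

// Bundled model: adds a few MB to your APK, but works offline right away
// (version shown is indicative only - check the MLKit release notes)
implementation "com.google.mlkit:barcode-scanning:16.1.1"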

Next, add the necessary permission to your Manifest so we can get access to the camera:

<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />

2. Create the View
We’ll need an XML file that contains a PreviewView to display the camera feed and a TextView to display the results of decoding our barcode.

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <!-- PreviewView will display live camera feed -->
    <androidx.camera.view.PreviewView
        android:id="@+id/cameraView"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintBottom_toTopOf="@+id/bottomText"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <!-- We'll use this TextView to display the decoded barcode value -->
    <TextView
        android:id="@+id/bottomText"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:background="@color/white"
        android:padding="32dp"
        android:textSize="24sp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        tools:text="Barcode Value: " />

</androidx.constraintlayout.widget.ConstraintLayout>

PreviewView comes from the camera-view dependency we added above and is the recommended view for displaying a CameraX Preview use case: it is the View that renders our live camera feed.

3. Request permissions
Now let’s work on our Activity. First, we need to make sure the user has granted us permission to use the camera. This part is pretty standard, so I will refrain from explaining in depth. You can read the official documentation here for any additional context needed.

private const val CAMERA_PERMISSION_REQUEST_CODE = 1
// Tag used for log messages in the snippets below
private const val TAG = "MainActivity"

class MainActivity : AppCompatActivity() {

    private lateinit var binding: ActivityMainBinding

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        binding = ActivityMainBinding.inflate(layoutInflater)
        setContentView(binding.root)
        if (hasCameraPermission()) bindCameraUseCases()
        else requestPermission()
    }

    // checking to see whether user has already granted permission
    private fun hasCameraPermission() =
        ActivityCompat.checkSelfPermission(
            this,
            Manifest.permission.CAMERA
        ) == PackageManager.PERMISSION_GRANTED

    private fun requestPermission() {
        // opening up dialog to ask for camera permission
        ActivityCompat.requestPermissions(
            this,
            arrayOf(Manifest.permission.CAMERA),
            CAMERA_PERMISSION_REQUEST_CODE
        )
    }

    override fun onRequestPermissionsResult(
        requestCode: Int,
        permissions: Array<out String>,
        grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == CAMERA_PERMISSION_REQUEST_CODE
            && grantResults.isNotEmpty()
            && grantResults[0] == PackageManager.PERMISSION_GRANTED
        ) {
            // user granted permissions - we can set up our scanner
            bindCameraUseCases()
        } else {
            // user did not grant permissions - we can't use the camera
            Toast.makeText(
                this,
                "Camera permission required",
                Toast.LENGTH_LONG
            ).show()
        }
    }

    private fun bindCameraUseCases() {
        // TODO: Configure our CameraX use cases
    }
}

Note: We’re using ViewBinding here:

binding = ActivityMainBinding.inflate(layoutInflater)

To use ViewBinding, make sure it is enabled in your app level build.gradle file:

android {
    viewBinding {
        enabled = true
    }
    ...
}

4. Set up the Preview Use Case
Now, let’s flesh out our bindCameraUseCases() method.

private fun bindCameraUseCases() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)

    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()

        // setting up the preview use case
        val previewUseCase = Preview.Builder()
            .build()
            .also {
                it.setSurfaceProvider(binding.cameraView.surfaceProvider)
            }

        // configure to use the back camera
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

        try {
            cameraProvider.bindToLifecycle(
                this,
                cameraSelector,
                previewUseCase
                // TODO: Add Image analysis use case
            )
        } catch (illegalStateException: IllegalStateException) {
            // Thrown if the use case has already been bound to another lifecycle
            // or this method is not called on the main thread
            Log.e(TAG, illegalStateException.message.orEmpty())
        } catch (illegalArgumentException: IllegalArgumentException) {
            // Thrown if the provided camera selector is unable to resolve a camera
            // to be used for the given use cases
            Log.e(TAG, illegalArgumentException.message.orEmpty())
        }
    }, ContextCompat.getMainExecutor(this))
}

The most important parts here are our previewUseCase object and our call to cameraProvider.bindToLifecycle(). Remember, the Preview use case allows us to show the live camera feed to the user. We make an instance of this use case (Preview.Builder().build()), connect it to the PreviewView we defined in our XML (it.setSurfaceProvider(binding.cameraView.surfaceProvider)), and bind it to the lifecycle of our Activity (cameraProvider.bindToLifecycle(this, cameraSelector, previewUseCase)). Calling bindToLifecycle() completes the setup and allows the camera preview to actually render on the screen.

Your app should now show the live camera preview! Something like this:

A screenshot of the app showing the camera preview with a text box below
Cute cat not guaranteed

5. Set up ML Kit’s BarcodeScanner and the Image Analysis Use Case
In the same block of code where we’re defining our Preview use case, we can add an Image Analysis use case. This is what will allow us to take the live preview and perform logic with its contents.

// configure our MLKit BarcodeScanning client

/* passing in our desired barcode formats - MLKit supports additional formats
   outside of the ones listed here, and you may not need to offer support for
   all of these. You should only specify the ones you need */
val options = BarcodeScannerOptions.Builder()
    .setBarcodeFormats(
        Barcode.FORMAT_CODE_128,
        Barcode.FORMAT_CODE_39,
        Barcode.FORMAT_CODE_93,
        Barcode.FORMAT_EAN_8,
        Barcode.FORMAT_EAN_13,
        Barcode.FORMAT_QR_CODE,
        Barcode.FORMAT_UPC_A,
        Barcode.FORMAT_UPC_E,
        Barcode.FORMAT_PDF417
    )
    .build()

// getClient() creates a new instance of the MLKit barcode scanner with the specified options
val scanner = BarcodeScanning.getClient(options)

// setting up the analysis use case
val analysisUseCase = ImageAnalysis.Builder()
    .build()

// define the actual functionality of our analysis use case
analysisUseCase.setAnalyzer(
    // newSingleThreadExecutor() will let us perform analysis on a single worker thread
    Executors.newSingleThreadExecutor(),
    { imageProxy ->
        processImageProxy(scanner, imageProxy)
    }
)

We also need to update our call to cameraProvider.bindToLifecycle() to include the new analysisUseCase:

cameraProvider.bindToLifecycle(this, cameraSelector, previewUseCase, analysisUseCase)

Within setBarcodeFormats(), I'm defining my BarcodeScanner to accept a number of linear (1D) barcode formats as well as 2D formats like QR codes and PDF417. The desired formats will depend on what types of barcodes you need to support. To get the best scanning performance out of MLKit, only the needed formats should be specified.
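
For example, here is a minimal sketch of the options for an app that only ever needs to read QR codes; narrowing the formats down like this gives the scanner less work to do per frame:

// A scanner restricted to QR codes only
val qrOnlyOptions = BarcodeScannerOptions.Builder()
    .setBarcodeFormats(Barcode.FORMAT_QR_CODE)
    .build()
val qrOnlyScanner = BarcodeScanning.getClient(qrOnlyOptions)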

6. Decoding the barcodes
Our processImageProxy() function that we pass into analysisUseCase.setAnalyzer() will contain our actual meaningful logic. In this method, we will pass our image over to MLKit’s process method in order to decode the barcode:

private fun processImageProxy(
    barcodeScanner: BarcodeScanner,
    imageProxy: ImageProxy
) {
    imageProxy.image?.let { image ->
        val inputImage =
            InputImage.fromMediaImage(
                image,
                imageProxy.imageInfo.rotationDegrees
            )

        barcodeScanner.process(inputImage)
            .addOnSuccessListener { barcodeList ->
                val barcode = barcodeList.getOrNull(0)
                // `rawValue` is the decoded value of the barcode
                barcode?.rawValue?.let { value ->
                    // update our textView to show the decoded value
                    binding.bottomText.text =
                        getString(R.string.barcode_value, value)
                }
            }
            .addOnFailureListener {
                // This failure will happen if the barcode scanning model
                // fails to download from Google Play Services
                Log.e(TAG, it.message.orEmpty())
            }
            .addOnCompleteListener {
                // When the image is from the CameraX analysis use case, we must
                // close received images when finished using them. Otherwise,
                // new images may not be received or the camera may stall.
                imageProxy.image?.close()
                imageProxy.close()
            }
    }
}

Here, we're taking the ImageProxy object we received from the analysisUseCase Analyzer and using it to create an InputImage object, the type that MLKit expects. We then pass that InputImage into barcodeScanner.process(), which instructs MLKit to decode any barcode present in the image. We take the first barcode detected in the frame, pull out the rawValue (in most cases a string of numbers, but for formats like QR codes it can be plain text or a URL), and set that rawValue as the text of our TextView. In this basic example, doing so lets us confirm the decoding is working properly. In a real app, we'd likely pass this rawValue to an API call to fetch more information to display, such as product images and titles.
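
As a hedged sketch of that idea (productRepository and its findByBarcode() callback are hypothetical placeholders, not part of MLKit or CameraX), the success listener might hand the value off like this:

// Hypothetical follow-up: look up the scanned product in our own data layer.
// productRepository is a placeholder - swap in whatever backend client you use.
barcode?.rawValue?.let { value ->
    productRepository.findByBarcode(value) { product ->
        // View updates must happen on the main thread
        runOnUiThread {
            binding.bottomText.text = product?.title
                ?: getString(R.string.barcode_value, value)
        }
    }
}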

Once this is complete, we clean up by closing the imageProxy. This is important: not doing so may leave the camera in a stalled state. Equally important is the failure case. This failure can happen if the device fails to connect to Google Play Services and download the barcode scanning model. When it does, no analysis will take place and your app will silently do nothing. This is a good spot to show an error to the user and suggest steps to resolve it: finding a better network connection, making sure they're signed in to the Play Store, or updating their Play Store version. These failures are rare and can only occur the first time the app is used; once the model has been downloaded successfully, it won't need to be downloaded again on subsequent usages.
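
As a minimal sketch (the wording and the choice of a Toast are mine, not from the MLKit docs), the failure listener in processImageProxy() could surface the problem like this; in a real app you'd also want to throttle it so it doesn't fire on every analyzed frame:

.addOnFailureListener { exception ->
    Log.e(TAG, exception.message.orEmpty())
    // Likely cause: the scanning model hasn't finished downloading
    // from Google Play Services yet. Tell the user what to do.
    Toast.makeText(
        this,
        "Barcode scanner is still downloading. Check your connection and try again.",
        Toast.LENGTH_LONG
    ).show()
}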

Voilà!

Now, running the app, you should see a live camera preview. When a barcode is presented in that preview, MLKit should decode the barcode’s value and our TextView should show the results! Something a little like this:

A screenshot of the app with a barcode showing in the camera view, and the text box displaying the decoded barcode value

Check out the entire code sample on GitHub here
