Building on the foundation laid by Android 12, described by many as the biggest Android OS update since 2014, this year’s upcoming Android 13 release refines the feature set and tweaks the user interface in subtle ways. However, it also includes many significant behavioral and platform changes under the hood, as well as several new platform APIs that developers should be aware of. For large screen devices in particular, Android 13 also builds upon the enhancements and features introduced in Android 12L, the feature drop for large screen devices.

Android 13 is set for release later this year, but ahead of its public release, Google has shared preview builds so developers can test their applications. The preview builds provide an early look at Android 13 and introduce many — but not all — of the new features, API updates, user interface tweaks, platform changes, and behavioral changes to the Android platform. In this article, we’ll document all of the changes that we find so you can prepare your application or device for Android 13.

Before we dive in, a quick note: Since Android 13 is still in beta, future beta releases will introduce additional changes to the platform. Those changes may modify the feature set, APIs, behaviors, or the UI, but we will keep this article updated to reflect the latest information we have available.

We highly recommend using the table of contents to navigate this article. The ToC includes hyperlinks to each section header, so you can quickly jump to a particular section. You can easily return to the ToC by clicking the link in the sidebar on the left if you’re browsing on desktop.


Table of Contents


When will Android 13 be released?

According to Google’s schedule, Android 13 will be released sometime in Q3 2022. There have been 2 developer previews and there will be 4 betas before the final release in Q3. Android 13 is expected to reach Platform Stability with the third beta release in June 2022. Once Platform Stability is reached, Android 13’s SDK and NDK APIs and app-facing system behaviors will be finalized. Since the Android 12L release came with framework API level 32, Android 13 will be released alongside framework API level 33.

Android 13 release timeline
The release timeline for Android 13. Source: Google.

The Developer Previews were intended for developers only and could thus only be installed manually. With the launch of the Android 13 beta program, however, Pixel users can enroll in the program to have the release roll out to their devices over the air. Pixel devices that are eligible to install the Android 13 Beta include the Pixel 4, Pixel 4 XL, Pixel 4a, Pixel 4a (5G), Pixel 5, Pixel 5a with 5G, Pixel 6, and Pixel 6 Pro.

At Google I/O 2022, multiple OEMs launched their own Android 13 Developer Preview/Beta programs for a select few devices. These OEMs include ASUS, Lenovo, Nokia (HMD Global), OnePlus, OPPO, Realme, Sharp, Tecno, Vivo, Xiaomi, and ZTE, and cover devices including the ASUS ZenFone 8, Lenovo Tab P12 Pro, Nokia X20, OnePlus 10 Pro, OPPO Find X5 Pro, OPPO Find N, Realme GT 2 Pro, Sharp AQUOS sense6, Tecno CAMON 19 Pro, Vivo X80 Pro, Xiaomi 12, Xiaomi 12 Pro, Xiaomi Pad 5, and ZTE Axon 40 Ultra. The full list of Android 13 beta builds available from OEMs can be found on this page.

Users who do not have a Pixel device or one of the aforementioned OEM devices can try the Android 13 Beta by installing the newly released Generic System Image (GSI). Alternatively, Android 13 can be installed on PCs through the Android Emulator.

Developers who set up the Android 13 Beta are encouraged to offer feedback on the new features and APIs and to test their apps for compatibility issues.


What are the new features in Android 13?

Accessibility audio description

Under Accessibility settings, there’s a new “Audio Description” toggle. The description of this toggle reads as follows: “Select audio sound track with audio description by default.” The value of this toggle is stored in Settings.Secure.enabled_accessibility_audio_description_by_default.

New Audio Description toggle in Android 13's Accessibility settings
Audio Description toggle in Android 13’s Accessibility settings.

In the Android 13 developer documentation, there’s a new isAudioDescriptionRequested method in AccessibilityManager that apps can call to determine if the user wants to select a sound track with audio description by default. As the documentation explains, audio description is a form of narration used to provide information about key visual elements in a media work for the benefit of visually impaired users. Apps can also register a listener to detect changes in the audio description state.
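As a sketch, a media app might query this preference and react to changes roughly as follows (the track-switching logic is left as a stub, and the function name is our own):

```kotlin
import android.content.Context
import android.view.accessibility.AccessibilityManager

// Sketch: honoring the user's audio-description preference on Android 13 (API 33).
fun observeAudioDescriptionPreference(context: Context) {
    val am = context.getSystemService(AccessibilityManager::class.java)

    // True if the user enabled "Audio Description" in Accessibility settings.
    if (am.isAudioDescriptionRequested) {
        // Select the audio track that carries audio description, if one exists.
    }

    // React to changes while the app is running.
    am.addAudioDescriptionRequestedChangeListener(context.mainExecutor) { enabled ->
        // Swap audio tracks to match the new preference.
    }
}
```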

Accessibility magnifier can now follow the text as you type

Under Settings > Accessibility > Magnification, a new “Follow typing” toggle has been added that makes the “magnification area automatically [follow] the text as you type.” The value of this toggle is stored in Settings.Secure.accessibility_magnification_follow_typing_enabled. Here is a video showing this feature in action.

Quick Settings tiles for color correction, one-handed mode, Privacy Controls

Google has added several new Quick Settings tiles in Android 13 and tweaked an existing one. These include:

  • A Quick Setting tile to toggle color correction.
  • A Quick Setting tile to toggle one-handed mode.
    • One-handed mode is disabled by default in AOSP but can be enabled with ‘setprop ro.support_one_handed_mode true’. One-handed mode settings won’t appear on large screen devices, but the Quick Setting tile can be appended to the set of active tiles by adding “onehanded” to Settings.Secure.sysui_qs_tiles.
  • A new Quick Setting tile to launch Privacy Controls, where users can toggle the camera, microphone, and location availability. Privacy Controls also contains a shortcut to launch security settings. The tile is provided by the PermissionController Mainline module.
  • The Quick Setting tile for “Device Controls” will have its title changed to “Home” when the user has selected Google Home as the Controls provider.
  • A Quick Setting tile to launch a QR code scanner. Read this section for more information on this feature.

Bluetooth LE Audio support

Bluetooth LE Audio is the next-generation Bluetooth standard defined by the Bluetooth SIG. It promises lower power consumption and higher audio quality using the new Low Complexity Communications Codec (LC3). The new standard also introduces features such as location-based audio sharing, multi-device audio broadcasting, and hearing aid support. 

There are multiple products on the market with hardware support for BLE Audio, and to prepare for the arrival of new BLE Audio-enabled audio products, Google has built support for LE Audio into Android 13. Android 13’s Bluetooth stack supports BLE Audio end to end, from an LC3 encoder and decoder to support in Developer Options for detecting and switching to the codec. Developers do not have to make any changes to their applications to take advantage of the new capabilities afforded by Bluetooth LE Audio.

One of the key features of BLE Audio is Broadcast Audio, which lets an audio source device broadcast audio streams to many audio sink devices. Android 13, of course, will support this feature. Devices with BLE Audio support will see an option to broadcast media when opening the media output picker. A dialog will inform users that they can “broadcast media to devices near [them], or listen to someone else’s broadcast.” Other users nearby with compatible Bluetooth devices can listen to the broadcast by scanning a QR code or entering the broadcast’s name and password.

MIDI 2.0 support

Musicians will be delighted to learn that Android 13 introduces support for the MIDI 2.0 standard. MIDI 2.0 was introduced in 2020 and adds bi-directionality, so MIDI 2.0 devices can communicate with each other to auto-configure themselves or exchange information on available functionality. The new standard also makes controllers easier to use and increases message resolution to 32 bits.

Settings for spatial audio and head tracking

Android 13 improves upon the initial spatial audio implementation introduced in Android 12L. Devices with an audio spatializer service may have a toggle in Bluetooth settings to enable spatial audio. This feature produces immersive audio that seems like it’s coming from all around you. However, the description in settings warns that spatial audio only works with some media. Audio can also be spatialized when played back through wired headphones or the phone’s speakers. Spatial audio support must be implemented by the device maker. The system property ‘ro.audio.spatializer_enabled’ should be set to true if an audio spatializer service is present and enabled, while Settings.Secure.spatial_audio_enabled holds the value of the spatial audio toggle.

“Spatial Audio”
“Audio from compatible media becomes more immersive”
“Phone speaker”
“Spatial Audio creates immersive sound that seems like it’s coming from all around you. Only works with some media.”
“Wired headphones”
“Off”
“On / %1$s”
“On / %1$s and %2$s”

If connected to a Bluetooth audio product with a head tracking sensor, Android 13 can also show a toggle in Bluetooth settings to enable head tracking. Head tracking makes audio sound more realistic by shifting the position of audio as you move your head around so it sounds more natural. Devices that can interface with Bluetooth products containing head tracking sensors should declare the feature ‘android.hardware.sensor.dynamic.head_tracker.’

“Head tracking”
“Audio changes as you move your head around to sound more natural”

Cinematic wallpapers

Android 13 adds new system APIs that Google will be using to generate “3D wallpapers” that “[move] when your phone moves.” Within the latest version of the WallpaperPicker app included in Android 13 DP2 on Pixel devices, there are strings that hint at a new “Effects” tab being added to the interface. This tab will let users apply cinematic effects to their wallpaper, including the 3D wallpaper effect.

“3D wallpapers”
“We were unable to apply the effects.\nTry with another photo.”
“Oh no!”
“Effects”
“Make your photo a 3D wallpaper that moves when your phone moves”
“3D Wallpapers”

Under the hood, this feature makes use of the new WallpaperEffects API. A new permission has been added to Android, android.permission.MANAGE_WALLPAPER_EFFECTS_GENERATION, which must be held by the app implementing the system’s wallpaper effects generation service in order to generate wallpaper effects. This permission gates the feature because the wallpaper effects generation service is trusted by the system and can therefore be activated without the explicit consent of the user.

The system’s wallpaper effects generation service is defined in the new configuration value config_defaultWallpaperEffectsGenerationService. On Pixel, this value is set to com.google.android.as/com.google.android.apps.miphone.aiai.app.wallpapereffects.AiAiWallpaperEffectsGenerationService. This points to a component within Android System Intelligence; however, there is no evidence of this component existing within current versions of the ASI app. It’s likely that only internal versions of the ASI app have this component. Since no service with this name exists on any of our test devices, the wallpaper effects generation service is disabled, so we are unable to test this feature at the moment.

However, we are able to test another aspect of this feature: wallpaper dimming. Android’s WallpaperService has added several methods related to a new wallpaper dimming feature. It checks the value of persist.debug.enable_wallpaper_dimming to see whether wallpaper dimming is enabled before dimming the wallpaper set by the user. The feature is not enabled yet, but there’s a CLI used for testing that lets us see how different wallpapers appear at different dimming values. It’s accessed through the ‘cmd wallpaper’ command as follows:

$ cmd wallpaper

Wallpaper manager commands:
  help
    Print this help text.

  set-dim-amount DIMMING
    Sets the current dimming value to DIMMING (a number between 0 and 1).

  dim-with-uid UID DIMMING
    Sets the wallpaper dim amount to DIMMING as if an app with uid, UID, called it.

  get-dim-amount
    Get the current wallpaper dim amount.

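For example, the dim level can be previewed over adb (a sketch, assuming the persist.debug.enable_wallpaper_dimming property has been set on the test device):

```shell
# Dim the current wallpaper to 50% and read the value back.
adb shell cmd wallpaper set-dim-amount 0.5
adb shell cmd wallpaper get-dim-amount
# Restore the wallpaper to full brightness.
adb shell cmd wallpaper set-dim-amount 0
```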
Although Google’s service for implementing wallpaper effects is likely proprietary, the API seems to be open for any device maker to hook their own service into. The UI implementation in WallpaperPickerGoogle is also likely Google’s proprietary work, but other device makers could adapt the open source WallpaperPicker to add an Effects tab and a cinematic effects toggle as well.

Material You dynamic color styles

Google introduced dynamic color, one of the key features of Google’s new Material You design language, in Android 12 on Pixel phones. Dynamic color support is set to arrive on more devices from other OEMs in the near future, according to Google, due in large part to new GMS requirements. Google’s dynamic color engine, codenamed monet, grabs a single source color from the user’s wallpaper and generates 5 tonal palettes, each comprising 13 tonal colors of varying luminance. These 65 colors make up the R.color attributes that apps can use to dynamically adjust their themes.
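For reference, apps can sample these generated colors directly; a minimal sketch (the naming scheme encodes the palette and the tonal step, 0–1000):

```kotlin
import android.content.Context

// Sketch: reading a few of monet's generated tonal colors (API 31+).
// system_accent1_500 is the wallpaper-derived primary accent at mid
// luminance; system_neutral1_* feeds background/surface colors.
fun sampleDynamicColors(context: Context): List<Int> = listOf(
    context.getColor(android.R.color.system_accent1_500),
    context.getColor(android.R.color.system_accent2_500),
    context.getColor(android.R.color.system_neutral1_500),
)
```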

Each of these palettes has hue and chroma values that are left undefined until monet generates them at runtime. This is what Google is seemingly taking advantage of in Android 13 for a new feature that will likely let users choose from a handful of additional Material You tonal palettes, called “styles.”

In Android 13, Google is working on new styles that adjust the hue and chroma values when generating the 5 Material You tonal palettes. These new styles are called TONAL_SPOT, VIBRANT, EXPRESSIVE, SPRITZ, RAINBOW, and FRUIT_SALAD. The TONAL_SPOT style will generate the default Material You tonal palettes as seen in Android 12 on Pixel. VIBRANT will generate a tonal palette with slightly varying hues and more colorful secondary and background colors. EXPRESSIVE will generate a palette with multiple prominent hues that are even more colorful. SPRITZ generates an almost grayscale, low color palette.

The specs of these new styles are defined in the new com.android.systemui.monet.Styles class. These new style options are hooked up to SystemUI’s ThemeOverlayController, so Fabricated Overlays containing the 3 accent and 2 neutral tonal palettes can be generated using these new specs. The WallpaperPicker/Theme Picker app interfaces with SystemUI’s monet by providing values to Settings.Secure.THEME_CUSTOMIZATION_OVERLAY_PACKAGES in JSON format.

Users can run the following shell command to generate a tonal palette using these style keys:

adb shell settings put secure theme_customization_overlay_packages '{"android.theme.customization.theme_style":"STYLE"}'

where STYLE is one of TONAL_SPOT, VIBRANT, EXPRESSIVE, SPRITZ, RAINBOW, or FRUIT_SALAD.

In Beta 1, Google is using these styles as strategies to generate a whole range of new theme options. In the “Wallpaper & style” app on Pixel devices, there are now up to 16 “wallpaper colors” and 16 “basic colors” to choose from.

These styles are employed as follows:

  • Wallpaper colors
    • Option #1, 5, 9, and 13 are based on TONAL_SPOT
    • Option #2, 6, 10, and 14 are based on SPRITZ
    • Option #3, 7, 11, and 15 are based on VIBRANT
    • Option #4, 8, 12, and 16 are based on EXPRESSIVE
  • Basic colors
    • Option #1-4 are based on TONAL_SPOT
    • Option #5-12 are based on RAINBOW
    • Option #13-16 are based on FRUIT_SALAD

Running the following command will reveal the current mThemeStyle as well as the 5 tonal palette arrays:

dumpsys activity service com.android.systemui/.SystemUIService | grep -A 16 "mSystemColors"

Resolution switching

Android 13 introduces support for switching the resolution in the Settings app. A new “Screen resolution” page will appear under Settings > “Display” on supported devices that lets the user choose between FHD+ (1080p) or QHD+ (1440p), the two most common screen resolutions seen on handhelds and tablets.

Screen resolution settings in Android 13

The availability of these options depends on the display modes exposed to Android. The logic is contained within the ScreenResolutionController class of Settings.

Under the hood, Google has tweaked Android’s display mode APIs so that the resolution and refresh rate can be persisted for each display in a multi-display device, such as foldables. In addition, new APIs can now be used to set the display mode (or only the resolution or refresh rate). These settings are persisted in the following values:

  • Settings.Global.user_preferred_resolution_width
  • Settings.Global.user_preferred_resolution_height
  • Settings.Global.user_preferred_refresh_rate
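The display modes that feed this page can be inspected from an app; a minimal sketch (setting the preferred mode goes through DisplayManager.setGlobalUserPreferredDisplayMode(), a privileged API 33 call that ordinary apps cannot make):

```kotlin
import android.content.Context
import android.hardware.display.DisplayManager
import android.view.Display

// Sketch: listing the modes the default display exposes — the same data the
// new "Screen resolution" page filters into FHD+/QHD+ options.
fun listDisplayModes(context: Context) {
    val dm = context.getSystemService(DisplayManager::class.java)
    val display = dm.getDisplay(Display.DEFAULT_DISPLAY)
    for (mode in display.supportedModes) {
        println("${mode.physicalWidth}x${mode.physicalHeight} @ ${mode.refreshRate}Hz")
    }
}
```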

Turn on dark mode at bedtime

Android’s dark mode is adding a new trigger: bedtime. Users can activate dark mode at their configured bedtime schedule on supported devices. On GMS devices, the bedtime schedule is typically configured via Google’s Digital Wellbeing app.

This feature was hidden from users in the earlier Android 13 preview builds but could be enabled by toggling the feature flag “settings_app_allow_dark_theme_activation_at_bedtime” in Developer Options. This feature flag could also be toggled by sending the following shell command:

adb shell settings put global settings_app_allow_dark_theme_activation_at_bedtime true

As of Beta 2, the “turns on at bedtime” option is available to all users.

Hub mode

Google believes that tablets are the future of computing, so it has recently invested in a new tablet division within Android, which helped oversee some of the new features in Android 12L, the feature drop for large screen devices. Some of the major changes in Android 12L focus on improving the overall tablet experience, but in Android 13, Google is preparing to improve one particular use case.

Android 13 Developer Preview 1 reveals early work on a new “hub mode” feature, referred to internally as “communal mode,” that will let users share apps between profiles on a common surface. Code reveals that users will be able to pick from a list of apps that support hub mode, though it isn’t clear what requirements an app needs to meet to support it. Once selected, the apps will be accessible by multiple users on the common surface. The primary user can also restrict app sharing to specific Wi-Fi networks, dubbed “trusted networks”; apps are only shared while the device is connected to one of them.

“Connected network”
“Previously connected networks”
“Hub mode”
“There are no apps which support hub mode”
“Shared apps”
“Use hub mode”
“Trusted networks”

It isn’t yet entirely clear what form the common surface will take. Initially, we believed it would be the lock screen, which has seen some other multi-user improvements in Android 13. However, new code related to “dreams,” Android’s code-name for interactive screensavers, not only points towards a revamp of the old feature but also suggests tie-ins with the new “hub mode.” Coupled with new dock-related code, both in Android 13 and in the kernel, this suggests that Google is planning something big for tablets that are intended to sit in place on a dock.

Since hub mode is still a work-in-progress, we are not able to demonstrate the feature. Enabling the feature first requires that the build declare support for the feature ‘android.software.communal_mode.’ Then, one needs to set SystemUI’s boolean flag ‘config_communalServiceEnabled’ to true. From there, however, there are several missing pieces, including the communalSourceComponent and communalSourceConnector packages as well as much of the code for the common surface. We also couldn’t find the interface for adding applications to the allowlist for communal mode, which is stored in Settings.Secure.communal_mode_packages.

However, we were at least able to access the screen for choosing “trusted networks”.

Screen saver revamp

Google introduced screen savers to Android back in Android 4.2 Jelly Bean, but since the feature’s introduction, it has received few major enhancements. As an aside, screen savers used to be called “daydreams” but were renamed in Android 7.0 Nougat to avoid confusion with Daydream VR, the now-defunct phone-based VR platform. Google still refers to screen savers as “dreams” internally, though, which is important for us to note. That’s because Android 13 introduces a lot of new dream-related code in SystemUI, suggesting that significant changes are on the way.

New classes in Android 13 reveal work on a dream overlay service that is intended to allow “complications” to run on top of screen savers. In Wear OS land, a complication is a service that provides data to be overlaid on a watch face. It appears that dreams will borrow this concept, with some of the available complications including air quality, cast info, date, time, and weather.

“Air Quality”
“Cast Info”
“Date”
“Time”
“Weather”
“Display time, date and weather on the screen saver”
“Show additional information”

In Developer Preview 2, the screen saver settings page was revamped to show previews. The available screen savers are shown in a grid with a customize button at the center of each item. A preview button at the bottom lets users see what a screen saver looks like in action. Meanwhile, Beta 1 introduces a toggle to turn the feature off entirely, replacing the “never” option in “when to start.”

In addition, Google appears to be adding a page to the setup wizard so users can select a screen saver when setting up their device. No other changes to the settings were implemented, but it’s likely that Google is planning other enhancements to the screen saver experience.

We’ll have to wait for the company to release more preview builds to learn more. Given the evidence so far, though, we’re confident in saying that the company is preparing major enhancements to screen savers, although whether these changes will land in time for the final Android 13 release, we cannot say.

Switch to admin user when docked

The first Developer Preview of Android 13 revealed a new “hub mode” feature in development that will let users share apps between profiles. The second Developer Preview reveals a new setting related to this feature that will seemingly let secondary profiles automatically switch to the primary user after docking the device. Switching to the primary user presumably allows the device to then enter “hub mode”.

The new setting is called “switch to admin user when docked.” It’s available in Android 13’s multi-user settings, but it isn’t shown to users unless the framework config value ‘config_enableTimeoutToUserZeroWhenDocked’ is set to ‘true’. The setting allows users to choose how long Android should wait before automatically switching to the primary user after the device is docked. Timeout values of “never”, “after 1 minute”, and “after 5 minutes” are currently supported.

NFC & NFC-F payment support for work profiles

Android 13 introduces NFC payment support for work profiles. Previously, only the primary user could perform contactless payments and access Settings > Connection preferences > NFC > Contactless payments. Work profiles can now also use NFC-F (FeliCa) on supported devices.

WiFi Trust on First Use

Android 13 adds support for Trust On First Use (TOFU). When it is not possible to configure the Root CA certificate for a server, TOFU enables installing the Root CA certificate received from the server during initial connection to a new network. The user must approve installing the Root CA certificate. This simplifies configuring TLS-based EAP networks. TOFU can be enabled when configuring a new network in Settings > Network & Internet > Internet > Add network > Advanced options > WPA/WPA-2/WPA-3-Enterprise > CA certificate > Trust on First Use. Enterprise apps can configure WiFi to enable or disable TOFU through the enableTrustOnFirstUse API.
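As a sketch, an enterprise app could opt a network configuration into TOFU like this (a real configuration also needs identity, Phase 2 method, and so on, which are omitted here):

```kotlin
import android.net.wifi.WifiEnterpriseConfig

// Sketch: enabling Trust On First Use on an enterprise Wi-Fi config (API 33).
// The Root CA certificate received from the server on first connection will
// then be installed after the user approves it.
fun buildTofuConfig(): WifiEnterpriseConfig {
    val config = WifiEnterpriseConfig()
    config.eapMethod = WifiEnterpriseConfig.Eap.TTLS
    config.enableTrustOnFirstUse(true)
    return config
}
```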

Per-app language preferences

In the Settings app under the System > Languages & input > Languages submenu, users can choose their preferred language. However, this language is applied system-wide, which multilingual users may not want. Some applications offer their own language selection feature, but not every app does. To reduce boilerplate code and improve compatibility when setting an app’s runtime language, Android 13 introduces a new platform API called LocaleManager that can get or set the user’s preferred language option.
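In code, the new API boils down to a getter and a setter; a minimal sketch (the function name is our own):

```kotlin
import android.app.LocaleManager
import android.content.Context
import android.os.LocaleList

// Sketch: setting this app's own language with the new LocaleManager (API 33).
fun setAppLanguage(context: Context, languageTag: String) {
    val localeManager = context.getSystemService(LocaleManager::class.java)
    // Persist the user's choice; the app's activities are recreated in the
    // new locale, and the value survives reboots and updates.
    localeManager.applicationLocales = LocaleList.forLanguageTags(languageTag)
}
```

For example, `setAppLanguage(context, "fr-FR")` would switch just this app to French, independent of the system language.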

Users can access the new per-app language preferences in Android 13 by going to Settings > System > Languages & input > App Languages. Here, the user can set their preferred language for each app, provided those apps include strings for multiple languages. The app’s language can also be changed by going to Settings > Apps > All apps > {app} > Language. These settings were hidden from users in Beta 1 due to a bug, but the settings entry was restored in Beta 2.

In order to help app developers test the per-app language feature, the first few Android 13 preview builds list per-app language preferences for all apps by default. However, the list of languages that’s shown to the user may not match the list of languages that an app actually supports. Developers must list the languages their apps actually support in the locales_config.xml resource file and point to it in the manifest with the new android:localeConfig attribute. Starting with Android 13 Beta 3, apps that do not provide a locales_config.xml resource file will not be shown in the per-app language preferences page.

Media Tap To Transfer

Android 13 contains references to a “Media Tap To Transfer” feature. The feature still appears to be in development, so it is difficult to determine exactly how it works. However, by enabling a hidden SystemUI flag, the feature’s command line interface becomes available. These commands are likely used by Google to quickly prototype the sender/receiver flow. The commands are as follows:

cmd statusbar media-ttt-chip-sender
cmd statusbar media-ttt-chip-receiver

According to Android Police, the media tap to transfer chip will be shown when a user is playing media on a local “media cast sender” device (e.g. a smartphone) and they move their device close enough to a “media cast receiver” device (e.g. a tablet) that they own. The chip will encourage the user to “transfer” the media from the sender to the receiver device. It isn’t clear how the media transfer happens, however.

We would like to thank Danny Lin (@kdrag0n) for his assistance in enabling this feature.

System Photo Picker

To prevent apps with the broad READ_EXTERNAL_STORAGE permission from accessing sensitive user files, Google introduced Scoped Storage in Android 10. Scoped Storage narrows storage access permissions to encourage apps to only request access to the specific files, file types, or directories they need, depending on their use case. An app can either use the media store API to access media files stored within well-defined collections, or it can use the Storage Access Framework (SAF) to load the system document picker and let the user pick which files they want to share with that app.

Android’s system document picker app — simply called “Files” — provides a barebones file picking experience. Android 13, however, is introducing a new system photo picker that extends the Files app with a new experience for picking photos and videos. The new system photo picker will help protect photo and video privacy by making it easier for users to pick the specific photos and videos to share with an app. Like the Files app, the new system photo picker can share photos and videos stored locally or on cloud storage.

Apps can use the new photo picker APIs in Android 13 to prompt the user to pick which photos or videos to share with the app, without that app needing permission to view all media files.
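Invoking the picker is a plain intent; a minimal sketch (the request code is arbitrary and our own):

```kotlin
import android.app.Activity
import android.content.Intent
import android.provider.MediaStore

// Arbitrary request code for identifying the result (hypothetical value).
const val REQUEST_PICK_IMAGES = 42

// Sketch: launching the new system photo picker (API 33). No storage
// permission is required; the app only receives URIs the user picked.
fun launchPhotoPicker(activity: Activity) {
    val intent = Intent(MediaStore.ACTION_PICK_IMAGES).apply {
        // Optionally allow multi-select, up to the system-defined cap.
        putExtra(MediaStore.EXTRA_PICK_IMAGES_MAX, 5)
    }
    activity.startActivityForResult(intent, REQUEST_PICK_IMAGES)
}
```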

The new photo picker experience will also roll out to Android devices running Android 11 or higher (excluding Android Go Edition) through an update to the MediaProvider module, which contains the Media Storage APK with the activity. Though Google has yet to roll out the feature to older Android versions, users can enable it through shell commands:

Non-root: cmd device_config put storage_native_boot picker_intent_enabled true
Root: setprop persist.sys.storage_picker_enabled true

It’s easier to install apps to guest profiles

When creating a guest user in Android 13, the owner can choose which apps to install to the guest profile. No data is shared between the owner and guest profiles, however, which means that the guest profile will still need to sign in to those apps if need be.

This feature was hidden from users starting in Developer Preview 2.

App drawer in the taskbar

The taskbar that Google introduced for large screen devices in Android 12L could only show up to 6 apps on the dock. In Android 13, an app drawer button has been added to the taskbar that lets users see and launch their installed apps. 

This feature is controlled by the Launcher3 feature flag ENABLE_ALL_APPS_IN_TASKBAR and is enabled by default on large screen devices.

Clipboard editor overlay

In Android 11, Google tweaked the screenshot experience by adding an overlay that sits in the bottom left corner of the screen. This overlay appears after taking a screenshot, and it contains a thumbnail previewing the screenshot, a share button, and an edit button to open the Markup activity.

In Android 13, Google has expanded this concept to clipboard content. Now, whenever the user copies text or images, a clipboard overlay will appear in the bottom left corner. This overlay contains a preview of the text or image that has been copied as well as an edit button that, when tapped, opens the Markup activity (for images) or a lightweight text editing activity (for text). If the text that’s been copied contains actionable information such as an address, phone number, or URL, then an additional chip may be shown to send the appropriate intent.

This feature was present but disabled by default in Developer Preview 2. It has since been enabled by default as of Beta 1.

Disable the long-press home button action

Under Settings > System > Gestures > System navigation, a new submenu has been added for the 3-button navigation that lets you disable “hold Home to invoke assistant”. After disabling this feature, a long press of the home button will no longer launch the default assistant activity.

Drag to launch multiple instances of an app in split-screen

Android 13 supports dragging to launch multiple instances of the same activity in split-screen view. The MULTIPLE_TASK flag is applied to the launch intent to let activities supporting multiple instances show side-by-side.
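The same flags can be combined programmatically; a sketch (the activity component name is a placeholder, and the target activity must support multiple instances):

```kotlin
import android.content.Context
import android.content.Intent

// Sketch: asking the system to open a second instance of an activity in the
// other half of the split screen.
fun launchSecondInstance(context: Context) {
    val intent = Intent().apply {
        setClassName(context, "com.example.app.NoteActivity") // hypothetical activity
        addFlags(
            Intent.FLAG_ACTIVITY_NEW_TASK or
            Intent.FLAG_ACTIVITY_MULTIPLE_TASK or    // allow a second instance
            Intent.FLAG_ACTIVITY_LAUNCH_ADJACENT     // place it beside the current task
        )
    }
    context.startActivity(intent)
}
```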

Launch an app in split screen from its notification

In Android 13 Developer Preview 2, it’s now possible to launch an app in split-screen multitasking mode by long-pressing its notification and then dragging and dropping to either half of the screen. This feature was actually introduced in Android 12L but was disabled by default. It is still quite inconsistent, so it is likely that the feature is still in development and may be disabled or removed in a future release. This video shows the feature in action.

Predictive back gesture navigation

Android 13 promises to make back navigation more “predictive”, though not in the sense of using machine learning to improve back gesture recognition. Instead, it appears that Android 13 is attempting to address the ambiguity of what happens when performing the back gesture. The feature will let users preview the destination or other result of a back gesture before they complete it, letting them decide whether they want to continue with the gesture or stay in the current view.

To complement this feature, the launcher is adding a new back-to-home transition animation that will make it very clear to the user that performing a back gesture will exit the app back to the launcher. The new back-to-home animation scales the app window as the user’s finger is swiping inward, similar to the swipe up to home animation. The user will see a snapshot of the home screen or app drawer as they’re swiping, indicating that completing the gesture will exit the app back to the launcher.

In order to make this new animation possible, Android 13 is changing the way apps handle back events. The system lets apps register back invocation callbacks through the new OnBackInvokedCallback platform API or the OnBackPressedCallback API in the AppCompat library (version 1.6.0-alpha03 or later). If the system detects that no handlers are registered, then it can play the new predictive back gesture animation, because it can “predict” what will happen when the user completes the back gesture. If handlers are registered, on the other hand, the system will invoke them in the reverse of the order in which they were registered.

Previously, the system wouldn’t always be able to predict what would happen when the user tries to go back, because individual activities could have their own back stacks that the system isn’t aware of and apps could override the behavior of back navigation. The way back events are handled in Android 13 enables a more intuitive back navigation experience while also letting apps continue to handle custom navigation.
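The real platform API registers callbacks on an OnBackInvokedDispatcher with a priority value; as a rough illustration of the dispatch rule described above, here is a plain-Java toy model (not the actual framework classes) in which the most recently registered handler wins and the system only “predicts” when no handler is registered:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of Android 13's back dispatching: handlers live in a stack,
// the most recently registered one handles the gesture, and the system
// only plays the predictive back-to-home animation when none is registered.
public class BackDispatchSketch {
    private final Deque<Runnable> callbacks = new ArrayDeque<>();

    public void registerOnBackInvokedCallback(Runnable cb) {
        callbacks.push(cb);
    }

    public void unregisterOnBackInvokedCallback(Runnable cb) {
        callbacks.remove(cb);
    }

    /** Returns true if the system may play the predictive back animation. */
    public boolean dispatchBack() {
        if (callbacks.isEmpty()) {
            return true;        // no app handler: system "predicts" back-to-home
        }
        callbacks.peek().run(); // most recently registered handler wins
        return false;
    }

    public static void main(String[] args) {
        BackDispatchSketch dispatcher = new BackDispatchSketch();
        System.out.println(dispatcher.dispatchBack()); // no handler: system animates

        Runnable closeSearch = () -> System.out.println("app closes its search view");
        dispatcher.registerOnBackInvokedCallback(closeSearch);
        System.out.println(dispatcher.dispatchBack()); // app handled it

        dispatcher.unregisterOnBackInvokedCallback(closeSearch);
        System.out.println(dispatcher.dispatchBack()); // system animates again
    }
}
```

The key design point this models is that registration is dynamic: an app registers a callback only while it has something to intercept (an open search view, an unsaved form) and unregisters it afterwards, so the system can animate predictively the rest of the time.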

Apps can opt in to the new predictive back gesture navigation system by setting the new enableOnBackInvokedCallback Manifest attribute to “true”. In the stable release of Android 13, there will be a new “predictive back animations” toggle in developer options that will let developers test the new back-to-home animation. The new back dispatching behavior will be enabled by default for apps targeting Android 14 (API level 34).

A demo of the new back-to-home animation can be seen at the 10:18 mark of the “what’s new in Android” video from Google I/O 2022.

Predictions row and search bar in the taskbar’s app drawer

Google is working to bring feature parity between the taskbar’s app drawer on large screen devices and the app drawer on handheld devices. The taskbar’s new app drawer now shows a predictions row and will support showing a search bar. The former is enabled by default, while the latter is controlled by a feature flag (ENABLE_ALL_APPS_ONE_SEARCH_IN_TASKBAR) during testing. However, the search bar currently doesn’t appear with this flag enabled in Beta 1.

Predictions row in the taskbar's app drawer
As of Beta 1, the taskbar’s app drawer now shows the app predictions row.

Bandwidth throttling

Simulating slow network conditions can be useful for development and debugging, but until Android 13, the OS hasn’t provided an easy way to throttle network speeds. In Android 13, a new setting in Developer Options lets developers set a bandwidth rate limit for any network capable of providing Internet access, whether Wi-Fi or cellular. This setting is called “network download rate limit” and has 6 options, ranging from “no limit” to “15Mbps.”

For more information on this feature, please refer to this article.

7-day view in privacy dashboard

Android 12 introduced the “Privacy dashboard” feature, which lets users view the apps that have accessed permissions marked as “dangerous” (i.e., runtime permissions). The dashboard only shows data from the past 24 hours, but in Android 13, a new “show 7 days” button is being tested that will show permission access data from the past 7 days.

This feature is not enabled by default. It is possible that this feature will roll out to Android 12 devices as well with an update to the PermissionController Mainline module.

Clipboard auto clear

Android offers a clipboard service that’s available to all apps for placing and retrieving text. Many keyboard apps like Google’s Gboard extend the global clipboard with a database that stores multiple items. Gboard even automatically clears any clipboard item that’s older than 1 hour.

Although any app can technically clear the primary clip in the global clipboard (so long as they’re either the foreground app or the default input method on Android 10+), Android itself does not automatically clear the clipboard. This means that any clipboard item left in the global clipboard could be read by an app at a later time, though Android’s clipboard access toast message will likely alert the user to this fact.

Android 13, however, has added a clipboard auto clear feature. This feature, which was disabled by default in the developer previews, automatically clears the primary clip from the global clipboard after a set amount of time has passed. By default, the clipboard is cleared after 3,600,000 milliseconds (60 minutes) have passed, matching Gboard’s functionality.
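The real logic lives in ClipboardService; as a rough illustration of the idea, here is a plain-Java sketch (not Android code) of a clipboard that schedules an auto-clear when a clip is set, skipping the clear if the clip has since been replaced:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Toy clipboard that clears its primary clip after a configurable timeout.
public class AutoClearClipboard {
    private final AtomicReference<String> primaryClip = new AtomicReference<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true); // don't keep the process alive for the timer
                return t;
            });
    private final long timeoutMillis;

    public AutoClearClipboard(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis; // Android 13's default: 3,600,000 ms
    }

    public void setPrimaryClip(String text) {
        primaryClip.set(text);
        // Only clear if this exact clip is still current when the timer fires.
        scheduler.schedule(() -> primaryClip.compareAndSet(text, null),
                timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public String getPrimaryClip() {
        return primaryClip.get();
    }

    public static void main(String[] args) throws InterruptedException {
        AutoClearClipboard clipboard = new AutoClearClipboard(100); // 100 ms demo
        clipboard.setPrimaryClip("one-time code 123456");
        System.out.println("before timeout: " + clipboard.getPrimaryClip());
        Thread.sleep(300);
        System.out.println("after timeout:  " + clipboard.getPrimaryClip());
    }
}
```

The compareAndSet guard mirrors the behavior a user would expect: copying something new restarts the clock for the new clip rather than letting a stale timer wipe it.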

The logic for this new feature is contained within the ClipboardService class of services.jar. Here is a demonstration of the clipboard auto clear feature in Android 13 with a timeout of 5 seconds:

This feature is enabled by default starting in the Android 13 Beta.

Control smart home devices without unlocking the device

Android 11 introduced the Quick Access Device Controls feature, which lets users quickly view the status of and control smart home devices like lights, thermostats, and cameras. Apps use the ControlsProviderService API to tell SystemUI which controls to show in the Device Controls area. The device maker can choose where to surface the Device Controls area, but in AOSP Android 12, it can be opened through a shortcut on the lock screen or the Quick Settings panel. However, if the user opens Device Controls while the device is locked, they can see their smart home devices but not control them.

In Android 13, however, apps can let users control their smart home devices without having them unlock their devices. The isAuthRequired method has been added to the Control class, and if it returns “false”, then users can interact with that control without authentication. This behavior can be set per control, so developers do not need to expose every device control offered by their app to interaction without authentication. The following video demonstrates the new API in action:

Starting with Beta 1, a new setting is available under Settings > Display > Lock screen called “control from locked device.” When enabled, users can “control external devices without unlocking your phone or tablet if allowed by the device controls app.”

Control from locked device toggle in Lock screen settings of Android 13
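In the real API, a Control is built via its builder and isAuthRequired is just a property on it; this plain-Java toy model (hypothetical class names, not the framework's) illustrates the per-control gate that SystemUI applies when the device is locked:

```java
// Toy model: a locked device only allows interaction with controls
// whose isAuthRequired() returns false.
public class LockedControlsSketch {
    static class Control {
        final String title;
        final boolean authRequired;

        Control(String title, boolean authRequired) {
            this.title = title;
            this.authRequired = authRequired;
        }

        boolean isAuthRequired() {
            return authRequired;
        }
    }

    static boolean canInteract(Control control, boolean deviceLocked) {
        return !deviceLocked || !control.isAuthRequired();
    }

    public static void main(String[] args) {
        Control lights = new Control("Hallway lights", false); // opted in by the app
        Control lock = new Control("Front door lock", true);   // still needs auth

        System.out.println(canInteract(lights, true));  // usable while locked
        System.out.println(canInteract(lock, true));    // blocked while locked
        System.out.println(canInteract(lock, false));   // fine once unlocked
    }
}
```

The per-control granularity matters for security: an app can safely expose a light toggle on the lock screen while keeping a door lock behind authentication.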

QR code scanner shortcut

QR codes have been an indispensable tool during the COVID-19 pandemic, as they’re a cheap and highly accessible way for a business to lead users to a specific webpage without directly interacting with them. In light of the renewed importance of QR codes, Google is implementing a handy shortcut in Android 13 to launch a QR code scanner.

Specifically, Android 13 implements a new Quick Setting tile to launch a QR code scanner. Android 13 itself won’t ship with a QR code scanning component, but it will support launching a component that does. The new QRCodeScannerController class in SystemUI defines the logic, and the component that is launched is contained within the device_config value “default_qr_code_scanner”. On devices with GMS, Google Play Services manages device_config values, and hence sets the QR code scanner component as com.google.android.gms/.mlkit.barcode.ui.PlatformBarcodeScanningActivityProxy.

The Quick Setting tile is part of the default set of active Quick Settings tiles. Its title is “QR code” and its subtitle is “Tap to scan.” The tile is grayed out if no component is defined in the device_config value “default_qr_code_scanner”. Within the Settings.Secure.sysui_qs_tiles settings value that keeps track of the tiles selected by the current user, the value for the QR code scanner tile is “qr_code_scanner”.

There is also a lock screen entry point for the QR code scanner, which is controlled by the framework flag ‘config_enableQrCodeScannerOnLockScreen.’ This value is set to false by default. Currently, Android 13 does not provide a user-facing setting to control the visibility of the lock screen entry point, but this will likely change in a future release.

Unified Security & Privacy settings

During Google I/O, Google announced that it will introduce a unified Security & Privacy settings page in Android 13. This new settings page will consolidate all privacy and security settings in one place, and it will also provide a color-coded indicator of the user’s safety status and guidance on how to boost security. The “Security” settings page on Pixel devices already shows a color-coded indicator of the user’s safety status and provides guidance, but it does not integrate privacy settings.

Unified security & privacy settings in Android 13
The new Security & Privacy settings in Android 13. Source: Google.

The new Security & Privacy settings page is contained within the PermissionController APK delivered through the PermissionController module. The component is com.google.android.permissioncontroller/com.android.permissioncontroller.safetycenter.ui.SafetyCenterActivity (for the Google-signed module), but the activity won’t launch unless the feature flag is enabled. This feature flag can be enabled by sending the following command:

cmd device_config put privacy safety_center_is_enabled true

While this makes the Security & Privacy settings page appear in top-level Settings, the actual page is not accessible in Beta 2 due to a missing privileged permission.

Toggle to show the vibrate icon in the status bar

Android places an icon in the status bar to reflect the sound mode, but in Android 12, the vibrate icon no longer appeared when the device was in vibrate mode. Many users complained about this change, and in response, Google has added a toggle in Android 13 under Settings > Sound & vibration, labeled “Always show icon when in vibrate mode,” that restores the vibrate icon in the status bar. Its value is stored in Settings.Secure.status_bar_show_vibrate_icon.

The setting to show the vibrate icon in the status bar in Android 13
Vibrate icon in status bar on Android 13
The vibrate icon appearing in Android 13’s status bar

This feature has been backported to Android 12 QPR3.

Vibration sliders for alarm and media vibrations

Under Settings > Sound & vibration > Vibration & haptics, sliders to configure the alarm and media vibration levels have been added. 

Alarm and media vibration sliders in Android 13's Settings
Alarm and media vibration sliders in Settings.

In conjunction with this change, the Settings configuration flag controlling the supported intensity level (config_vibration_supported_intensity_levels) has been updated to be an integer, so device makers can specify how many distinct levels are supported.


What are the UI changes in Android 13?

Consolidated font and display settings

The “font size” and “display size” settings under Settings > Display have been consolidated into a single page, called “display size and text.” The unified settings page also shows a preview for how changes to the font and display size affect icon and text scaling. It also includes two toggles previously found in Accessibility settings: “bold text” and “high contrast text.”

Display size and text settings in Android 13
Display size and text settings in Android 13

Low light clock when docked

Android has multiple features to display useful information while the device is idling, including a screen saver and ambient display. The former is set to receive a major revamp in Android 13 as part of Google’s overall effort to improve the experience of docked devices, while the latter is set to be joined by a simpler variant.

Android 13 includes a new “low light clock” that simply displays a TextClock view in a light shade of gray. This view is only shown when the device is docked, the ambient lighting is below a certain brightness threshold, and the SystemUI configuration value ‘config_show_low_light_clock_when_docked’ is set to ‘true.’

Bottom search bar in the launcher app drawer

Android 13 DP2 on Pixel has a new feature flag that, when enabled, shifts the search bar in the app drawer to the bottom of the screen. The search bar remains at the bottom until the keyboard is opened, after which it’ll shift to stay above the keyboard.

This feature is disabled by default but can be enabled by setting ENABLE_FLOATING_SEARCH_BAR to true. It remains to be seen if this behavior is exclusive to Google’s Pixel Launcher fork or if this will be available in AOSP Launcher3.

Custom interface for PCs

Android can run on a variety of hardware, including dedicated devices like kiosks, but Google only officially supports a handful of device types. These device types are defined in the Compatibility Definition Document (CDD), and they include handheld devices (like phones), televisions, watches, cars, and tablets. When building Android for a particular device, device makers need to declare the feature corresponding to the device type; for example, television device implementations are expected to declare the feature ‘android.hardware.type.television’ to tell the system and apps that the device is a television.

Since Android apps can also run on Chromebooks, Google created the ‘android.hardware.type.pc’ device type a few years back so apps can target traditional clamshell and desktop computing devices and the framework can recognize apps that have been designed for those form factors. However, it wasn’t until Android 12L that Google decided to revamp the UI for large screen devices, and in Android 13, Google is taking another step in that direction.

On PC devices, the launcher’s taskbar is tweaked to show dedicated buttons for notifications and quick settings. These buttons are persistently shown on the right side of the taskbar, where the 3-button navigation keys would ordinarily be displayed on other large screen devices.

In addition, I noticed that all apps are launched in freeform multi-window mode by default. Freeform multi-window was introduced in Android 7.0 Nougat and to this day remains hidden behind a developer option. Google may be getting ready to enable freeform multitasking support by default on large screen devices like PCs, but this remains to be seen.

Within Launcher3 is a new navigation bar mode called “kids mode.” When enabled on large screen devices, the drawables and layout for the back and home icons change, the recents overview button is hidden, and the navigation bar remains visible when apps enter immersive mode.

Kids mode nav bar in Android 13

This feature is controlled by the boolean value Settings.Secure.nav_bar_kids_mode.

During the development of Android 12L, Google experimented with unifying the home screen and app drawer search experiences. This experiment was gated by the ENABLE_ONE_SEARCH flag, but the flag was removed from the Launcher3 codebase prior to the AOSP release.

This unified search bar returned in Android 13 with the release of Beta 1, but it was disabled by default. To enable it, the following command needed to be sent:

cmd device_config put launcher enable_one_search true

As of Beta 2, however, this search bar is now available by default.

Lock screen rotation enabled on large screen devices

Android’s framework configuration controlling lock screen rotation has been set to “false” by default for years, but it is now set to “true” by default. Even so, in Android 13 the lock screen will only rotate on large screen devices.

Lock screen in landscape orientation on Android 13

Redesigned media output picker UI

In Android 10, Google introduced an output picker that lets users switch audio output between supported audio sources, such as connected Bluetooth devices. This output picker is accessed by tapping the media output picker button in the top-right corner of the media player controls. Now in Android 13, Google has revamped the media output picker UI.

Android 13's redesigned media output picker.
Redesigned media output picker UI in Android 13

The highlight of the new media output picker UI is the larger volume slider for each connected device.

Redesigned media player UI

In Android 11, Google reworked the media player controls to support multiple sessions and integration with the notifications shade. Now in Android 13, Google has revamped the media player UI.

The new media player UI features a larger play/pause button that’s been shifted to the right side, a (squiggly) progress slider that’s at the bottom left in line with the rest of the media control buttons, and the media info on the left side. The album art is displayed in the background, and the color scheme of the media output switcher button is extracted from the album art.

The UI of the long-press context menu for the media player has also been updated. The shortcut to settings has been moved to a gear in the upper right corner, and the “hide” button is now filled.

Squiggly progress bar

The progress bar in the media player now shows a squiggly line up to the current timestamp.

A short screen recording showing Android 13’s squiggly progress bar.

In Beta 1, the squiggly progress bar was centered at the bottom of the media player. In Beta 2, the progress bar has been shortened and is now shown at the bottom left.

Fullscreen user profile switcher

In an effort to improve the experience of sharing a device, Google has introduced numerous improvements to the multi-user experience. One change that’s in development is a fullscreen user profile switcher.

Android 13's fullscreen user profile switcher for large screen devices
Android 13’s fullscreen user profile switcher

This interface is likely intended for large screen devices that have a lot of screen real estate. It’s currently disabled by default but can be enabled through the configuration value config_enableFullscreenUserSwitcher.

Revamped UI for adding a new user

The UI for creating a new profile has been redesigned in Android 13. Users can now choose from several colored avatar options for their profile picture, take a photo using the default camera app, or pick an image from the gallery.

Status bar user profile switcher

Google is experimenting with placing a status bar chip that displays the current user profile and, when tapped, opens the user profile switcher. This chip is not enabled by default in current Android 13 builds, but it can be enabled by setting the SystemUI flag flag_user_switcher_chip to true. Given the limited space available on smartphones, it’s likely this feature is intended for large screen devices like tablets.

User switcher on the keyguard

In Android 13, the keyguard screen (i.e., the lock screen PIN/password/pattern entry page) can show a large user profile switcher on the top (in portrait mode) or on the left (in landscape mode). This feature is disabled by default but is controlled by the SystemUI boolean ‘config_enableBouncerUserSwitcher’.

Button rearrangement in the notification shade

Google has moved the power, settings, and profile switcher buttons in the notification shade. Previously, they were located directly underneath the Quick Settings panel. Now, they are located at the very bottom, tucked to the right.

Do Not Disturb may be rebranded to Priority mode

Do Not Disturb mode, the feature that lets users choose what apps and contacts can interrupt them, was renamed to Priority mode in Developer Preview 2. Apart from the branding change, the schedules page has been redesigned to use switches instead of toggles and now shows summaries for schedules and calendar events (instead of just whether they’re “on” or “off”). Schedules list the days and times for which they’re active, while calendar events show what events they’re triggered on.

Android 13 Beta 1 brought back the original Do Not Disturb branding, so it seems that the “Priority mode” branding isn’t here to stay.

Enabling silent mode disabled all haptics

When setting the sound mode to “silent”, all haptics were disabled in the Android 13 developer previews, even those for interactions (such as gesture navigation). On Android 12L, “vibration & haptics” are similarly grayed out with a warning that says “vibration & haptics are unavailable because [the] phone is set to silent”, but in our testing, haptics for interactions still worked. This is not the case in the Android 13 developer previews, however. Fortunately, this change has been reverted in the Android 13 beta release.


What are the behavioral changes in Android 13?

Media controls are now derived from PlaybackState

MediaStyle is a notification style used for media playback notifications. In its expanded form, it can show up to 5 notification actions chosen by the media application. Prior to Android 13, the system displayed media controls based on the list of notification actions added to the MediaStyle notification.

Starting with Android 13, the system will derive media controls from PlaybackState actions rather than the MediaStyle notification. If an app doesn’t include a PlaybackState or targets an older SDK version, then the system will fall back to displaying actions from the MediaStyle notification. This change aligns how media controls are rendered across Android platforms.

Screenshots of media controls on a phone and tablet running Android 13. Credits: Google.

Android 13 can show up to five action buttons based on the PlaybackState. In its compact state, the media notification will only show the first three action slots. The following table lists the action slots and the criteria the system uses to display each slot.

How Android 13 decides which buttons to show in the media player notification

Control an app’s ability to turn on the screen

A new appop permission has been added to Android 13 that lets users control whether or not an application can turn on the screen. Users can go to “Settings > Apps > Special app access > Turn screen on” to choose which apps can turn the screen on. All apps that hold the WAKE_LOCK permission appear in this list, save for SystemUI.

Defer boot completed broadcasts for background restricted apps

Android allows applications to start up at boot by listening for the ACTION_BOOT_COMPLETED or ACTION_LOCKED_BOOT_COMPLETED broadcasts, which are both automatically sent by the system. Android also lets users place apps into a “restricted” state that limits the amount of work they can do while running in the background. However, apps placed in this “restricted” state are still able to receive the ACTION_BOOT_COMPLETED and ACTION_LOCKED_BOOT_COMPLETED broadcasts. This will change in Android 13.

Android 13 will defer the ACTION_BOOT_COMPLETED and ACTION_LOCKED_BOOT_COMPLETED broadcasts for apps that have been placed in the “restricted” state and are targeting API level 33 or higher. These broadcasts will be delivered after any process in the UID is started, which includes things like widgets or Quick Settings tiles.

App developers can test this behavior in one of two ways. First, developers can go to Settings > Developer options > App Compatibility Changes and enable the DEFER_BOOT_COMPLETED_BROADCAST_CHANGE_ID option. This is enabled by default for apps targeting Android 13 or higher. Alternatively, developers can manually change the device_config flag that controls this behavior as follows:

cmd device_config put activity_manager defer_boot_completed_broadcast N

where N can be 0 (don’t defer), 1 (defer for all apps), 2 (defer for UIDs that are background restricted), or 4 (defer for UIDs that have targetSdkVersion T+).

These conditions can be combined by bit-OR-ing the values. By default, the flag is set to ‘6’ to defer the ACTION_BOOT_COMPLETED and ACTION_LOCKED_BOOT_COMPLETED broadcasts for all apps that are both background restricted and targeting Android 13.
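The bit-OR encoding described above can be decoded in a few lines. The following plain-Java sketch uses the values from the text (1 = all apps, 2 = background-restricted UIDs, 4 = UIDs targeting Android 13+); the exact combination semantics inside ActivityManager may differ, but the default of 6 deferring only apps that are both restricted and targeting Android 13 is modeled here:

```java
// Decoding the defer_boot_completed_broadcast flag value.
public class DeferFlags {
    static final int DEFER_ALL = 1;                   // defer for all apps
    static final int DEFER_BACKGROUND_RESTRICTED = 2; // defer for restricted UIDs
    static final int DEFER_TARGET_T_PLUS = 4;         // defer for targetSdk 33+

    static boolean shouldDefer(int flags, boolean restricted, boolean targetsT) {
        if (flags == 0) {
            return false; // 0: never defer
        }
        if ((flags & DEFER_ALL) != 0) {
            return true;  // 1: defer unconditionally
        }
        // Remaining bits act as conditions that must all hold.
        boolean defer = true;
        if ((flags & DEFER_BACKGROUND_RESTRICTED) != 0) defer &= restricted;
        if ((flags & DEFER_TARGET_T_PLUS) != 0) defer &= targetsT;
        return defer;
    }

    public static void main(String[] args) {
        int defaultFlags = DEFER_BACKGROUND_RESTRICTED | DEFER_TARGET_T_PLUS; // 6
        System.out.println(shouldDefer(defaultFlags, true, true));   // deferred
        System.out.println(shouldDefer(defaultFlags, true, false));  // delivered
        System.out.println(shouldDefer(defaultFlags, false, true));  // delivered
    }
}
```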

Foreground service manager and notifications for long-running foreground services

Android 13’s new Foreground Services (FGS) Task Manager shows the list of apps that are currently running a foreground service. This list, called Active apps, can be accessed by pulling down the notification drawer and tapping on the affordance. Each app will have a “stop” button next to it.

The FGS Task Manager lets users stop foreground services regardless of target SDK version. Here’s how stopping an app via the FGS Task Manager compares to swiping up from the recents screen or pressing “force stop” in settings.

How stopping an app via the foreground service task manager in Android 13 differs from swiping up in recents and force closing it in settings
Comparing behavior with “swipe up” and “force stop” user actions. Source: Google.

The system will send a notification to the user inviting them to interact with the FGS Task Manager after any app’s foreground service has been running for at least 20 hours within a 24-hour window. This notification will read as “[app] is running in the background for a long time. Tap to review.” However, it will not appear if the foreground service is of type FOREGROUND_SERVICE_TYPE_MEDIA_PLAYBACK or FOREGROUND_SERVICE_TYPE_LOCATION.
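The trigger condition above (20 hours of runtime within a 24-hour window, with media playback and location service types exempt) can be sketched as a small predicate. This is a toy model with hypothetical method names, not the system's actual implementation:

```java
import java.time.Duration;

// Toy check for the "long-running foreground service" notification trigger.
public class LongRunningFgsCheck {
    static final Duration WINDOW = Duration.ofHours(24);    // evaluation window
    static final Duration THRESHOLD = Duration.ofHours(20); // runtime threshold

    /**
     * runtimeInWindow: total foreground-service runtime accumulated within
     * the last 24 hours. exemptType: true for media-playback/location services.
     */
    static boolean shouldNotify(Duration runtimeInWindow, boolean exemptType) {
        if (exemptType) {
            return false; // exempt service types never trigger the notification
        }
        return runtimeInWindow.compareTo(THRESHOLD) >= 0;
    }

    public static void main(String[] args) {
        System.out.println(shouldNotify(Duration.ofHours(21), false)); // notify
        System.out.println(shouldNotify(Duration.ofHours(19), false)); // quiet
        System.out.println(shouldNotify(Duration.ofHours(21), true));  // exempt
    }
}
```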

Certain applications are exempted from appearing in the FGS Task Manager. These include system-level apps, safety apps holding the ROLE_EMERGENCY role, and all apps when the device is in demo mode. Certain apps cannot be closed by the user even if they appear in the FGS Task Manager, including device owner apps, profile owner apps, persistent apps, and apps that have the ROLE_DIALER role.

For more information on the new system notification for long-running foreground services, visit this page. For more information on the new foreground services task manager, visit this page.

Job priorities

Android’s JobInfo API lets apps submit info to the JobScheduler about the conditions that need to be met for the app’s job to run. Apps can specify the kind of network their job requires, the charging status, the storage status, and other conditions. Android 13 expands these options with a job priority API, which lets apps indicate their preference for when their own jobs should be executed.

The scheduler uses the priority to sort jobs for the calling app, and it also applies different policies based on the priority. There are 5 priorities, ranked from lowest to highest: PRIORITY_MIN, PRIORITY_LOW, PRIORITY_DEFAULT, PRIORITY_HIGH, and PRIORITY_MAX.

  • PRIORITY_MIN: For tasks that the user should have no expectation or knowledge of, such as uploading analytics. May be deferred to ensure there’s sufficient quota for higher priority tasks. 
  • PRIORITY_LOW: For tasks that provide some minimal benefit to the user, such as prefetching data the user hasn’t requested. May still be deferred to ensure there’s sufficient quota for higher priority tasks.
  • PRIORITY_DEFAULT: The default priority level for all regular jobs. These have a maximum execution time of 10 minutes and receive the standard job management policy.
  • PRIORITY_HIGH: For tasks that should be executed lest the user think something is wrong. These jobs have a maximum execution time of 4 minutes, assuming all constraints are satisfied and the system is under ideal load conditions.
  • PRIORITY_MAX: For tasks that should be run ahead of all others, such as processing a text message to show as a notification. Only Expedited Jobs (EJs) can be set to this priority.
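To illustrate the ordering aspect of these levels, here is a toy scheduler that sorts an app's pending jobs by priority. The real JobScheduler also applies per-priority quotas and runtime limits, none of which is modeled here; the class and names are illustrative only:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Toy scheduler: orders an app's pending jobs so higher priorities run first.
public class JobPrioritySketch {
    enum Priority { MIN, LOW, DEFAULT, HIGH, MAX } // ordinal order = rank

    static class Job {
        final String name;
        final Priority priority;

        Job(String name, Priority priority) {
            this.name = name;
            this.priority = priority;
        }
    }

    static List<Job> sortForExecution(List<Job> pending) {
        // Enums compare by declaration order, so reversing puts MAX first.
        return pending.stream()
                .sorted(Comparator.comparing((Job j) -> j.priority).reversed())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Job> pending = List.of(
                new Job("upload analytics", Priority.MIN),
                new Job("post message notification", Priority.MAX),
                new Job("periodic sync", Priority.DEFAULT));
        for (Job j : sortForExecution(pending)) {
            System.out.println(j.priority + ": " + j.name);
        }
    }
}
```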

Notifications for excessive background battery use

When an app consumes a lot of battery life in the background during the past 24 hours, Android 13 will show a notification warning the user about the excessive background battery usage. Android will show this warning for any app the system detects high battery usage from, regardless of target SDK version. If the app has a notification associated with a foreground service, though, the warning won’t be shown until the user dismisses the notification or the foreground service finishes, and only if the app continues to consume a lot of battery life. Once the warning has been shown for an app, it won’t appear for another 24 hours.

Android 13 measures an app’s impact on battery life by analyzing the work it does through foreground services, Work tasks (including expedited work), broadcast receivers, and background services.

Prefetch jobs that run right before an app’s launch

Apps can use Android’s JobScheduler API to schedule jobs that should run sometime in the future. The Android framework decides when to execute the job, but apps can submit info to the scheduler specifying the conditions under which the job should be run. Apps can mark jobs as “prefetch” jobs using JobInfo.Builder.setPrefetch() which tells the scheduler that the job is “designed to prefetch content that will make a material improvement to the experience of the specific user of the device.” The system uses this signal to let prefetch jobs opportunistically use free or excess data, such as “allowing a JobInfo#NETWORK_TYPE_UNMETERED job run over a metered network when there’s a surplus of metered data available.” A job to fetch top headlines of interest to the current user is an example of the kind of work that should be done using prefetch jobs.

In Android 13, the system will estimate the next time an app will be launched so it can run prefetch jobs prior to the next app launch. Internally, the UsageStats API has been updated with an EstimatedLaunchTimeChangedListener, which is used by PrefetchController to subscribe to updates for when the system thinks the user will next launch the app. If a prefetch job hasn’t started by the time the app has been opened (ie. is on TOP), then the job is deferred until the app has been closed. Apps cannot get around this by scheduling a prefetch job with a deadline, as apps targeting Android 13 are not allowed to set deadlines for prefetch jobs. Prefetch jobs are allowed to run for apps with active widgets, though.
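The gating rules above (defer while the app is on TOP, allow regardless for apps with active widgets, otherwise run when a launch is estimated to be near) can be sketched as a small decision function. The one-hour horizon below is an invented illustration, not a value from the platform:

```java
// Toy model of Android 13's prefetch-job gating.
public class PrefetchGateSketch {
    static final long ILLUSTRATIVE_HORIZON_MS = 60 * 60 * 1000; // made-up 1 h

    static boolean mayRunPrefetchJob(long estimatedLaunchInMillis,
                                     boolean appIsOnTop,
                                     boolean hasActiveWidget) {
        if (hasActiveWidget) {
            return true;  // widgets keep prefetch jobs allowed
        }
        if (appIsOnTop) {
            return false; // deferred until the app has been closed
        }
        // Run only when the estimated launch is close enough to be useful.
        return estimatedLaunchInMillis < ILLUSTRATIVE_HORIZON_MS;
    }

    public static void main(String[] args) {
        System.out.println(mayRunPrefetchJob(10 * 60 * 1000, false, false)); // run
        System.out.println(mayRunPrefetchJob(10 * 60 * 1000, true, false));  // defer
        System.out.println(mayRunPrefetchJob(Long.MAX_VALUE, false, true));  // run
    }
}
```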

The Android Resource Economy

With every new release, Google further restricts what apps running in the background can do, and Android 13 is no exception. Instead of creating a foreground service, Google encourages developers to use APIs like WorkManager, JobScheduler, and AlarmManager to queue tasks, depending on when the task needs to be executed and whether the device has access to GMS. For the WorkManager API in particular, there’s a hard limit of 50 tasks that can be scheduled. While the OS does intelligently decide when to run tasks, it does not intelligently decide how many tasks an app can queue or whether a certain task is more necessary to run.

Starting in Android 13, however, a new system called The Android Resource Economy (TARE) will manage how apps queue tasks. TARE will delegate “credits” to apps that they can then “spend” on queuing tasks. The total number of “credits” that TARE will assign (called the “balance”) depends on factors such as the current battery level of the device, whereas the number of “credits” it takes to queue a task will depend on what that task is for.

From a cursory analysis, it seems that the EconomyManager in Android’s framework lists how many Android Resource Credits each job takes, the maximum number of credits in circulation for AlarmManager and JobScheduler respectively, and other information pertinent to TARE. For example, the following “ActionBills” are listed, alongside how many “credits” it takes to queue a task: ALARM_CLOCK, NONWAKEUP_INEXACT_ALARM, NONWAKEUP_INEXACT_ALLOW_WHILE_IDLE_ALARM, NONWAKEUP_EXACT_ALARM, NONWAKEUP_EXACT_ALLOW_WHILE_IDLE_ALARM, WAKEUP_INEXACT_ALARM, WAKEUP_INEXACT_ALLOW_WHILE_IDLE_ALARM, WAKEUP_EXACT_ALARM, and WAKEUP_EXACT_ALLOW_WHILE_IDLE_ALARM.

TARE is controlled by the Settings.Global.enable_tare boolean, while the AlarmManager and JobScheduler constants are stored in Settings.Global.tare_alarm_manager_constants and Settings.Global.tare_job_schedule_constants respectively. TARE settings can also be viewed in Developer Options. Starting in Beta 1, the TARE page in Developer Options supports editing the system’s parameters directly, without the use of the command line.

Also in Beta 1 is a big revamp to the way TARE works under the hood. The biggest change is the separation of the “supply” of Android Resource Credits from their “allocation.” Previously, the credits that apps could accrue to “spend” on tasks were limited by the “balances” already accrued by other apps, because a “maximum circulation” capped how many credits could be allocated to all apps combined. The “maximum circulation” has been removed and replaced with a “consumption limit” that caps the credits that can be consumed across all apps within a single discharge cycle. This lets apps accrue credits regardless of the balances of other apps. The consumption limit scales with the battery level, so the lower the battery level, the fewer actions that can be performed.
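To make the credit model concrete, here is a rough Python sketch of how a TARE-style policy could meter background work. Everything here, from the class name to the numbers, is hypothetical and heavily simplified from the behavior described above; it is not Android’s actual EconomyManager logic.

```python
# Illustrative model of a TARE-style credit policy. All names and numbers
# are hypothetical; Android's real EconomyManager is far more involved.

class CreditPolicy:
    def __init__(self, max_credits, battery_level):
        # Beta 1 behavior: a device-wide "consumption limit" scales with
        # battery level, replacing the old shared "maximum circulation".
        self.consumption_limit = int(max_credits * battery_level / 100)
        self.consumed = 0  # credits spent across ALL apps this discharge cycle

    def try_run(self, app_balance, action_cost):
        """Return (ran, new_balance): the action runs only if the app can
        afford it and the device-wide consumption limit isn't exhausted."""
        if app_balance < action_cost:
            return False, app_balance
        if self.consumed + action_cost > self.consumption_limit:
            return False, app_balance
        self.consumed += action_cost
        return True, app_balance - action_cost

policy = CreditPolicy(max_credits=1000, battery_level=50)  # 500-credit limit
ran, balance = policy.try_run(app_balance=30, action_cost=10)
print(ran, balance)  # True 20
```

Note how an app’s balance no longer depends on other apps’ balances; only the shared consumption limit, tied to battery level, constrains everyone.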

Updated rules for putting apps in the restricted App Standby Bucket

Android 9 introduced App Standby Buckets, which define what restrictions are placed on an app based on how recently and how frequently the app is used. Android 9 launched with 4 buckets: active, working set, frequent, and rare. Android 12 introduced a fifth bucket called restricted, which holds apps that consume a great deal of system resources or exhibit undesirable behavior. Once placed into the restricted bucket, apps can only run jobs once per day in a 10-minute batched session, run fewer expedited jobs, invoke one alarm per day, and use FCM messages only for messages that result in user-visible interactions. Unlike with the other buckets, these restrictions apply even when the device is charging but are loosened if the device is idle and on an unmetered network.

Android 13 updates the rules that the system uses to decide whether to place an app in the restricted App Standby Bucket. If an app exhibits any of the following behaviors, the system places it in the bucket:

  • The user doesn’t interact with the app for 8 days. 
  • The app invokes too many broadcasts or bindings in a 24-hour period.
  • The app drains a significant amount of battery life during a 24-hour period. The system looks at work done through jobs, broadcast receivers, and background services when deciding the impact on battery life. The system also looks at whether the app’s process has been cached in memory.

If the user interacts with the app in one of a number of ways, then the system will take the app out of the restricted bucket and put it into a different bucket. The user may tap on a notification sent by the app, perform an action in a widget belonging to the app, affect a foreground service by pressing a media button, connect to the app through Android Automotive OS, or interact with another app that binds to a service of the app in question. If the app has a visible PiP window or is active on screen, then it is also removed from the restricted bucket.

Apps that meet the following criteria are exempted from entering the restricted bucket in the first place:

  • Has active widgets
  • Has the SCHEDULE_EXACT_ALARM, ACCESS_BACKGROUND_LOCATION, or ACCESS_FINE_LOCATION permission
  • Has an in-progress and active MediaSession

All system and system-bound apps, companion device apps, apps running on a device in demo mode, device owner apps, profile owner apps, persistent apps, VPN apps, apps with the ROLE_DIALER role, and apps that the user has explicitly designated to provide “unrestricted” functionality in settings are also exempted from entering the restricted bucket (and all other battery-preserving measures introduced in Android 13).
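The placement logic above can be sketched as a small decision function. This is purely an illustration of the rules as described; the 8-day figure comes from the text above, while the broadcast threshold is an invented placeholder, since Android doesn’t document its exact value here.

```python
# Hypothetical sketch of the restricted-bucket placement rules described
# above. EXCESSIVE_BROADCASTS_24H is an invented threshold for
# illustration; only the 8-day rule is a concrete figure from the text.

EXCESSIVE_BROADCASTS_24H = 1000  # placeholder, not Android's real limit

def is_exempt(active_widgets=False, holds_exempt_permission=False,
              active_media_session=False):
    # Any one of the exemption criteria keeps the app out of the bucket.
    return active_widgets or holds_exempt_permission or active_media_session

def place_in_restricted_bucket(days_since_use, broadcasts_24h,
                               heavy_battery_drain, **exemptions):
    if is_exempt(**exemptions):
        return False
    return (days_since_use >= 8
            or broadcasts_24h > EXCESSIVE_BROADCASTS_24H
            or heavy_battery_drain)

# Unused for 9 days and not exempt -> restricted bucket.
print(place_in_restricted_bucket(9, 0, False))                       # True
# Same usage pattern, but the app has an active widget -> exempt.
print(place_in_restricted_bucket(9, 0, False, active_widgets=True))  # False
```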

Hardware camera and microphone toggle support

Android 12 added toggles in Quick Settings and Privacy settings to enable or disable camera and microphone access for all apps. Developers can call the SensorPrivacyManager API introduced with Android 12 to check if either toggle is supported on the device, and in Android 13, this API has been updated so developers can check whether the device supports a software or hardware toggle.

Hardware switches for camera and microphone access are typically not found on smartphones, but they do appear in many television and smart display products. These devices may have 2-way or 3-way hardware switches on the product itself or on a remote, but in Android 12, the toggle state of these switches wouldn’t be reflected in Android’s built-in camera and microphone toggles. This, however, will change in Android 13, which supports propagating the hardware switch state.

Devices with hardware camera and microphone switches should set the ‘config_supportsHardwareCamToggle’ and ‘config_supportsHardwareMicToggle’ framework values to ‘true’.

Non-matching intents are blocked

Prior to Android 13, apps could send an intent to an exported component of another app even if the intent didn’t match an intent filter declared by the receiving component. This made it the responsibility of the receiving app to sanitize the intent, but many often didn’t. To tighten security, Android 13 blocks non-matching intents that are sent to apps targeting Android 13 or higher, regardless of the target SDK version of the app sending the intent. This essentially makes intent filters actually act like filters for explicit intents.

Android 13 will not enforce intent matching if the component doesn’t declare any intent filters, if the intent originates from within the same app, or if the intent originates from the system UID or root user.
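A toy model captures the gating logic and its exemptions. Real IntentFilter matching also covers categories and data URIs; this sketch checks only actions, which is enough to show when delivery is blocked.

```python
# Toy model of Android 13's intent-matching enforcement as described
# above. Only action matching is modeled; names are illustrative.

def delivery_allowed(intent_action, receiver_filters, same_app=False,
                     sender_is_system=False, target_sdk=33):
    # Enforcement only applies when the RECEIVING app targets API 33+.
    if target_sdk < 33 or same_app or sender_is_system:
        return True
    if not receiver_filters:  # component declares no intent filters
        return True
    # Explicit intents must now actually match a declared filter.
    return any(intent_action in f for f in receiver_filters)

filters = [{"com.example.ACTION_SYNC"}]  # hypothetical exported component
print(delivery_allowed("com.example.ACTION_SYNC", filters))  # True
print(delivery_allowed("com.example.OTHER", filters))        # False: blocked
```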

Sideloaded apps may be blocked from accessing Accessibility APIs

Android’s Accessibility APIs are incredibly powerful as they allow for reading the contents of the screen or performing inputs on behalf of the user. These functions are often misused by banking trojans to steal data from users, which is why Google has been cracking down on misuse of the Accessibility API. Android 13 introduces further restrictions on the use of the Accessibility API, intended to target apps that are sideloaded from outside of an app store.

Android 13 may block the user from enabling an app’s accessibility service depending on how the app was installed. If the app was installed by an app that uses the session-based package installation API, then users will not be blocked from enabling the app’s accessibility service. If an app was installed by an app that uses the non-session-based package installation API, however, then users will initially be blocked from enabling the app’s accessibility service.

The reason Android doesn’t apply restrictions to apps installed via the session-based package installation API is that this installation method is often used by app stores. On the other hand, the non-session-based package installation API is often used by apps that handle APK files but that aren’t interested in acting as an app store, such as file managers, mail clients, or messaging apps.

This restriction is tied to a new appop permission called ACCESS_RESTRICTED_SETTINGS. Depending on the mode, the permission may allow or deny access to the app’s accessibility services page in settings. When an app is newly installed via the non-session-based API, ACCESS_RESTRICTED_SETTINGS is set to “deny”. This causes the settings entry for the app’s accessibility service to be grayed out and the dialog “for your security, this setting is currently unavailable” to be shown when tapping the entry. After viewing the dialog, however, the mode is set to “ignore”. The user can then go to the app info settings for the app in question, open the menu, and then press “allow restricted settings” to unblock access to the app’s accessibility service. Doing so changes the mode to “allow”, which is the mode that would have been set if the user had installed the app via an app that used the session-based API.
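The mode transitions described above form a simple state machine, which the following hypothetical sketch mirrors. The class and method names are invented for illustration; only the deny/ignore/allow modes come from the text.

```python
# Minimal state machine mirroring the ACCESS_RESTRICTED_SETTINGS appop
# flow described above (deny -> ignore -> allow). Purely illustrative.

class RestrictedSettingsOp:
    def __init__(self, session_based_install):
        # Session-based installs (typical of app stores) start unrestricted.
        self.mode = "allow" if session_based_install else "deny"

    def view_blocked_dialog(self):
        if self.mode == "deny":
            self.mode = "ignore"  # user saw the "setting is unavailable" dialog

    def allow_restricted_settings(self):
        self.mode = "allow"  # user pressed "allow restricted settings"

    def accessibility_page_enabled(self):
        return self.mode == "allow"

op = RestrictedSettingsOp(session_based_install=False)
op.view_blocked_dialog()
op.allow_restricted_settings()
print(op.mode)  # allow
```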

Developers can use the appops CLI to test this new permission and behavior. For example, running ‘cmd appops set <package> ACCESS_RESTRICTED_SETTINGS <mode>’, where <package> is the name of the application package and <mode> is one of allow, ignore, or deny, manually changes the mode for an application. The command ‘cmd appops query-op ACCESS_RESTRICTED_SETTINGS <mode>’ can also be used to query which applications have ACCESS_RESTRICTED_SETTINGS set to a given mode.

Although Android 13 Beta 1 currently only gates access to accessibility service settings behind this permission, Google will likely expand this to other sensitive permissions commonly used by malware, such as the notification listener.

For more information on this change, refer to my previous article that covered the background and implementation details in more depth.

Faster hyphenation

When text reaches the end of a line in a TextView, rather than exceed the margin and go off screen, a line break will be inserted and the text will wrap around to the next line. Hyphens can be inserted at the end of the line to make the text more pleasant to read if a word is split, but enabling hyphenation comes at a performance cost. Google found that, when hyphenation is enabled, up to 70% of the CPU time spent on measuring text is on hyphenation. Thus, hyphenation was disabled by default in Android 10.

However, Android 13 significantly improves hyphenation performance by as much as 200%. This means that developers can enable hyphenation in their TextViews with little to no impact on rendering performance. To make use of the optimized hyphenation performance in Android 13, developers can use the new fullFast or normalFast frequencies when calling TextView’s setHyphenationFrequency method.

Improved Japanese text wrapping

Apps that support the Japanese language can now wrap text by “Bunsetsu”, the smallest coherent unit of words, instead of by character. This makes text more readable for Japanese users. Developers can take advantage of this wrapping by using android:lineBreakWordStyle="phrase" with TextViews.

Japanese text wrapping with phrase style enabled (below) and without (above). Source: Google.

Improved line heights for non-Latin scripts

Support for non-Latin scripts such as Tamil, Burmese, Telugu, and Tibetan has improved in Android 13. The new version now uses a line height that’s adapted for each language, preventing clipping and improving the positioning of characters. Developers just need to target Android 13 to take advantage of these improvements in their apps; however, they should be aware that these changes may affect the UI when using apps in non-Latin languages.


What are the platform changes in Android 13?

Audio HAL 7.1 with Ultrasound & latency mode

Android 13’s audio framework has added a system API for an Ultrasound input source and content type, requiring version 7.1 of the audio HAL. This API can only be accessed by apps holding the ACCESS_ULTRASOUND permission, which has a protection level of system|signature.

In addition, audio HAL v7.1 adds APIs for controlling output stream variable latency mode. Latency mode control is required if the device plans to support spatial audio with head tracking over a Bluetooth A2DP connection. There are two latency modes: FREE (no specific constraint on the latency) and LOW (a relatively low latency compatible with head tracking operations, typically less than 100 ms).

Virtual Audio Device

Android supports creating virtual displays of arbitrary resolution and density, and by specifying the ID of the virtual display, it’s also possible to launch applications directly onto it. In order to support streaming applications from that virtual display to a remote device, Android needs to support capturing both the video and audio of applications running on virtual displays. Video capture is already well-supported by Android, but audio capture of apps running on virtual displays has not been supported until now.

In Android 13, Google has added a new createVirtualAudioDevice API in the VirtualDevice class. This API returns a VirtualAudioDevice object that callers can use to capture audio from, and inject microphone audio into, applications running on a virtual display. A VirtualAudioController service listens for changes in applications running on the virtual display as well as changes in the playback and recording config. This service notifies the VirtualAudioSession to update the AudioRecord/AudioTrack inside the AudioCapture/AudioInjection class internally.

HDR video support in Camera2 API

Android 13’s camera HAL lets device makers expose whether or not their device supports 10-bit camera output to the Camera2 API. The new REQUEST_AVAILABLE_CAPABILITIES_DYNAMIC_RANGE_TEN_BIT constant in CameraMetadata indicates that the device supports one or more 10-bit camera outputs specified in DynamicRangeProfiles.getSupportedProfiles. Implementations that expose 10-bit camera output must at least support the HLG10 profile, though if they support other profiles, they can advertise the recommended (in terms of image quality, power, and performance) profile to apps through the CameraCharacteristics#REQUEST_RECOMMENDED_TEN_BIT_DYNAMIC_RANGE_PROFILE constant. Apps using the Camera2 API can set the dynamic range profile using the OutputConfiguration.setDynamicRangeProfile API.

Stream use cases

Camera2 in Android 13 adds support for “stream use cases” that lets device makers optimize the camera pipeline based on the purpose of the stream. For example, the camera device could have a stream use case defined for video calls so that the optimal configuration is provided for video conferencing apps using the Camera2 API. Depending on the stream use case, the camera device may tweak the tuning parameters, camera sensor mode, image processing pipeline, and 3A (AE/AWB/AF) behaviors. The new REQUEST_AVAILABLE_CAPABILITIES_STREAM_USE_CASE constant in CameraMetadata indicates that the device supports one or more stream use cases. Apps can query the list of supported stream use cases through the CameraCharacteristics#SCALER_AVAILABLE_STREAM_USE_CASES field. Google requires that implementations supporting stream use cases support DEFAULT, PREVIEW, STILL_CAPTURE, VIDEO_RECORD, PREVIEW_VIDEO_STILL, and VIDEO_CALL. Apps can set the stream use case using the OutputConfiguration.setStreamUseCase API.

Multi-generational LRU support

Linux currently manages pages using two pairs of least recently used (LRU) lists, one pair for file-backed pages and another for anonymous pages. Each pair contains just one active and one inactive list. Pages that have just been accessed are placed at the top of the active list followed by other pages that have been recently accessed, while pages that haven’t been recently accessed are eventually moved to the inactive list. The problem with this approach is that Linux often places pages in the wrong list, may evict useful file-backed pages when there are idle anonymous pages to purge, and scans for anonymous pages using a CPU-heavy reverse mapping algorithm.

To solve these problems, Google developed a new page reclaim strategy called “multi-generational LRU.” Multi-generational LRU further divides the LRU lists into generations spanning between an “oldest” and “youngest” generation. The more generations, the more accurately Linux will evict pages that are acceptable to evict. Since the multi-generational LRU framework scans the page table directly, it avoids the costly reverse lookup of page table entries that the current approach takes. Google’s fleetwide profiling shows an “overall 40% decrease in kswapd CPU usage,” an “85% decrease in the number of low-memory kills at the 75th percentile,” and an “18% decrease in app launch time[s] at the 50th percentile.”
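The generational idea can be illustrated with a heavily simplified model: pages live in numbered generations, eviction drains the oldest generation first, and accessing a page promotes it to the youngest generation. This sketch is purely conceptual and bears no resemblance to the kernel’s actual data structures.

```python
# Simplified illustration of generational page aging. Not the kernel's
# real implementation; just the concept of "oldest evicted first".

from collections import OrderedDict

class MultiGenLRU:
    def __init__(self, generations=4):
        self.gens = [OrderedDict() for _ in range(generations)]  # [0] = oldest

    def access(self, page):
        for gen in self.gens:
            gen.pop(page, None)
        self.gens[-1][page] = True  # promote to the youngest generation

    def evict(self):
        for gen in self.gens:  # scan from oldest to youngest
            if gen:
                page, _ = gen.popitem(last=False)
                return page
        return None

    def age(self):
        """Open a new youngest generation; the oldest absorbs its neighbor."""
        oldest = self.gens.pop(0)
        oldest.update(self.gens[0])
        self.gens[0] = oldest
        self.gens.append(OrderedDict())

lru = MultiGenLRU()
for p in ("a", "b", "c"):
    lru.access(p)
lru.age()
lru.access("a")     # "a" moves to the new youngest generation
print(lru.evict())  # b: the oldest page that wasn't re-accessed
```

More generations mean finer-grained recency information, which is why the real implementation can pick better eviction candidates than a two-list scheme.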

This new page reclaim strategy is in the process of being merged into the upstream Linux kernel, but it has already been backported to the android13-5.10 and android13-5.15 branches of the Android Common Kernel. The feature can be enabled by compiling the kernel with the CONFIG_LRU_GEN flag and then running the following command:

echo y > /sys/kernel/mm/lru_gen/enabled

For a high-level overview of how Linux’s virtual memory management works and how multi-generational LRU improves page reclamation, I recommend reading my Android Dessert Bites entry covering these topics.

Android’s Bluetooth stack is now a Mainline module

Android 13 has migrated Android’s Bluetooth stack from system/bt to packages/modules/Bluetooth, making it updateable as a Project Mainline module. Work is ongoing in AOSP to add new features to the Bluetooth stack — such as support for the new Bluetooth LE Audio standard.

Android’s ultra-wideband stack becomes a Mainline module

The new com.android.uwb APEX module contains Android’s ultra-wideband stack. Ultra-wideband, or UWB for short, is a short-range, high-frequency wireless communication protocol that is commonly used for precise positioning applications, such as pinpointing the location of a lost object that’s nearby. Android 12 first introduced UWB support but restricted API access. With a new Jetpack library on the way, a reference HAL in the works, and an updateable Mainline module, Android 13 will expand the use of UWB hardware for new software features.

Tweaks to updatable NNAPI drivers

In order to improve the consistency and performance of machine learning on Android, Google announced Android’s “updateable, fully integrated ML inference stack” at Google I/O last year. In conjunction with Qualcomm, Google would roll out NNAPI driver updates to devices through Google Play Services. Google originally planned to roll out driver updates to devices running Android 12, but it delayed those plans to Android 13.

Recently, however, an AOSP code change submitted by a Google engineer stated that the Android ML team “ultimately did not move forward with its updatability plans,” which would have had “updated platform drivers delivered through GMSCore.” Although this code change suggests that Google has abandoned its plans to deliver NNAPI driver updates through Google Play Services, Oli Gaymond, Product Lead on the Android ML team, confirmed that the company still has plans to ship updatable NNAPI drivers. He noted, however, that “there have been some design changes” that are “currently in testing” but that driver updates will still be delivered via Play Services.

This article has more information on Google’s plans for updatable NNAPI platform drivers.

Improvements to Dynamic System Updates

Google introduced Dynamic System Updates (DSU) in Android 10 to enable installing a generic system image (GSI) without overwriting the device’s original system partition and wiping the original user data partition. This is possible because DSU creates a new dynamic partition (on devices that support this userspace partitioning scheme) and loads the downloaded system image onto the new partition. The downloaded system image and generated data image are stored within the original data partition and are deleted when the user is finished testing on the GSI.

Development on DSU has been sparse since its initial release, but several improvements are being introduced to the feature in Android 13. These changes are documented in AOSP and include performance and UI improvements. Installing a GSI through DSU will be significantly faster in Android 13 thanks to a code change that enlarges the default shared memory buffer size, bringing the time to install a GSI down to under a minute on physical devices. The progress bar now also shows which partition DSU is currently installing, and it is weighted to reflect that writable partitions take less time to install.

Custom component for the Quick Access Wallet activity

Android 11 introduced the Quick Access Wallet feature to let users quickly select which card to use for contactless payments. These cards are provided by apps that implement the Quick Access Wallet API, while the surface on which these cards are displayed is provided by a system app. In Android 11, the wallet activity was provided by a standalone system app, while in Android 12, it was provided by SystemUI.

SystemUI still provides the wallet activity by default in Android 13, but device makers can specify a different component. Device makers can provide the configuration to be used by QuickAccessWalletController by defining a method in QuickAccessWalletClient. Alternatively, device makers can use the new boolean attribute ‘useTargetActivityForQuickAccess’ in QuickAccessWalletService to set whether the system should use the activity specified by the android:targetActivity attribute (true) or the default wallet activity provided by SystemUI (false).

Basic support for WiFi 7

WiFi 7 is the marketing name for IEEE 802.11be, the next-generation WiFi standard that promises incredibly fast speeds and very low latency. Anshel Sag, Principal Analyst at Moor Insights & Strategy, explains the most important features in the new WiFi standard that are driving these improvements. “Wi-Fi 7 adds features like 4K QAM modulation for higher peak throughput (compared to 1024). In addition to the higher-order modulation, probably the biggest feature in Wi-Fi 7 is the addition of multi-link which comes in multiple flavors and adds the ability to aggregate spectrum across multiple bands which wasn’t possible before or to switch between those bands to use the band with the least interference/latency.”

The first products with WiFi 7 support will likely launch at the end of this year or early next year, well ahead of the standard’s finalization in early 2024. In preparation for these product launches, Android 13 adds preliminary support for WiFi 7. Android 13’s DeviceWiphyCapabilities class, which “contains the WiFi physical layer attributes and capabilities of the device”, has 802.11be in the list of standards and 320MHz as a supported channel width. The DeviceWiphyCapabilities class contains standard nl80211 commands to query the WiFi standards and channel bandwidths supported by the WiFi driver, which Android’s wificond process uses to communicate with the driver.

Evidence of Android 13’s basic support for WiFi 7 can be found in the 21st edition of Android Dessert Bites.

DNS over HTTPS support

Android has had native support for DNS over TLS, more commonly known as DoT, since Android 9. Android 13 now adds native support for DNS over HTTPS, i.e., DoH. Both encrypt DNS queries rather than sending them in cleartext over UDP, but DoT uses TLS on a port dedicated to DNS traffic, while DoH sends queries and responses over HTTPS on port 443, so all DNS traffic is mixed with other HTTPS traffic.

Google is currently experimenting with whether or not to enable DoH support by default in Android 13. The functionality is delivered to devices as part of the DNS Resolver module and can be enabled through the device_config boolean flag “doh” under the “netd_native” namespace.

Fast Pair now available in AOSP

Fast Pair, Google’s proprietary protocol for the nearby detection and pairing of Bluetooth devices, appears to be headed to AOSP as part of the new “com.android.nearby” modular system component. The Fast Pair service is currently implemented in Google Play Services, thus requiring Google Mobile Services to be bundled with the firmware. However, the new NearbyManager system API will be available in AOSP. This will let OEMs set up their own server to sync and serve certified Fast Pair devices’ metadata.

Users can toggle Fast Pair scanning in Settings > Connected devices > Connection preferences > Fast Pair. The value of this setting is held in the integer Settings.Secure.fast_pair_scan_enabled.

For more details on this platform change and how it will impact the Android ecosystem, refer to this article.

Multiple Enabled Profiles on a single eSIM

In order to use multiple subscriptions from one or more carriers, Android devices need as many physical SIM slots as there are subscriptions. This can include multiple SIM card slots, multiple eSIM modules, or a combination of SIM cards and eSIMs. This is because both SIM cards and eSIMs currently only support a single active SIM profile.

Android 13, however, includes an implementation of Multiple Enabled Profiles (MEP), a method for enabling multiple SIM profiles on a single eSIM. This takes advantage of the fact that eSIMs already support storing multiple SIM profiles, so by creating a logical interface between the eSIM and the modem and multiplexing it onto the physical interface, more than one SIM profile stored on an eSIM can interface with the modem. Support for MEP requires an update to the radio HAL but does not require changes to the underlying hardware or modem firmware.
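The difference MEP makes can be shown with a small hypothetical model: the eSIM has always been able to store several profiles, but the number of concurrently enabled profiles is capped by how many logical slots can be multiplexed onto the single physical interface. The class and slot model below are invented purely for illustration.

```python
# Hypothetical sketch of Multiple Enabled Profiles (MEP): an eSIM stores
# many profiles, and MEP maps each enabled profile to a logical slot
# multiplexed over the single physical interface to the modem.

class Esim:
    def __init__(self, mep_logical_slots=1):
        self.profiles = []  # stored profiles (storage was never the limit)
        self.enabled = {}   # logical slot -> active profile
        self.slots = mep_logical_slots

    def download(self, profile):
        self.profiles.append(profile)

    def enable(self, profile):
        if profile not in self.profiles:
            raise ValueError("profile not stored on eSIM")
        if len(self.enabled) >= self.slots:
            raise RuntimeError("no free logical slot (MEP unsupported?)")
        self.enabled[len(self.enabled)] = profile

mep = Esim(mep_logical_slots=2)  # with MEP: two concurrently active profiles
mep.download("carrier-a")
mep.download("carrier-b")
mep.enable("carrier-a")
mep.enable("carrier-b")  # works only because MEP provides a second slot
print(len(mep.enabled))  # 2
```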

For more information on MEP, please read this article which covers the patent behind this method and the Android APIs that local profile assistant (LPA) apps are expected to use.

APK Signature Scheme v3.1 support

Android 9 Pie introduced support for APK Signature Scheme v3, which made it possible to rotate signing keys. According to the documentation, APK Signature Scheme v3 has the option to include a proof-of-rotation record in its signing block for each signing certificate, enabling apps to be signed with a new signing certificate that’s linked to the past signing certificate used to sign the APK. In order to support installing streamed APKs in Android 11+, Google introduced APK Signature Scheme v4, which stores the signature in a separate file.

Now in Android 13, Google is introducing APK Signature Scheme v3.1, which addresses some of the known issues with APK key rotation on earlier OS versions. This scheme lets apps support original and rotated signers in a single APK, meaning they can target Android 13 or later for rotation without needing to configure multi-targeting APKs. APK Signature Scheme v3.1 uses a new block ID that isn’t recognized on Android 12L or earlier, so earlier releases will use the original signer in the v3.0 block. Furthermore, the new scheme supports SDK version targeting, which allows key rotation to target a later release.
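The backward compatibility described above falls out of how APK signing block IDs work: a platform simply skips block IDs it doesn’t recognize. The following toy model illustrates that fallback; the block ID constants are taken from AOSP’s apksig sources, but treat the whole sketch, especially the selection logic, as illustrative rather than the platform’s actual verifier.

```python
# Toy model of v3.1 fallback: Android 13 reads the v3.1 block (rotated
# key), while Android 12L and earlier skip the unknown block ID and use
# the v3.0 block. Block ID values per AOSP's apksig; logic is illustrative.

V3_0_BLOCK_ID = 0xf05368c0
V3_1_BLOCK_ID = 0x1b93ad61

def pick_signer(signing_blocks, sdk_level):
    known = {V3_0_BLOCK_ID}
    if sdk_level >= 33:  # Android 13 understands the v3.1 block
        known.add(V3_1_BLOCK_ID)
    # Prefer the newest scheme the platform recognizes.
    for block_id in (V3_1_BLOCK_ID, V3_0_BLOCK_ID):
        if block_id in known and block_id in signing_blocks:
            return signing_blocks[block_id]
    return None

blocks = {V3_0_BLOCK_ID: "original-signer", V3_1_BLOCK_ID: "rotated-signer"}
print(pick_signer(blocks, sdk_level=33))  # rotated-signer
print(pick_signer(blocks, sdk_level=32))  # original-signer
```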

Better error reporting for Keystore and KeyMint

For apps that use the Android Keystore system to store cryptographic keys, Android 13 provides an exception that details failures in generating or using a key. The exception’s public error codes indicate the cause of the error, while its methods indicate whether the error was caused by a system or key issue and whether retrying the operation with the same or a new key may succeed.

Full-disk encryption support removed from Android

Android has supported encrypting the contents of the user data partition through two different schemes: full-disk encryption (FDE) and file-based encryption (FBE). FDE encrypts the entire data partition using a key derived from the user’s PIN, passcode, or password, and before Android boots, the user is required to decrypt the partition. FBE, on the other hand, allows different files to be encrypted with different keys (that are still cryptographically bound to the user’s lock screen authentication method), offering more flexibility through the Direct Boot feature.

GMS requirements mandate that devices launching with Android 10 or later use file-based encryption (FBE), so Google is removing support for converting devices from FDE to FBE. Furthermore, Android 13 fully removes support for FDE, so the OS will not recognize the encrypted data partition of devices that haven’t migrated.

Hardware support for Android’s Identity Credential API

To support the development of mobile driver’s licenses applications on Android, Google created the Identity Credential API. This API provides an interface to a secure store for user identity documents, including not just mobile driver’s licenses but any generic document type. The API can be implemented with or without hardware support, but implementing hardware support enables a greater level of security and privacy.

If a device maker chooses to implement hardware support, they must implement the Identity Credential HAL. Implementing this HAL enables the ability to store identity documents in the device’s secure hardware, which on most devices is their Trusted Execution Environment. Few device makers have implemented the IC HAL, but in Android 13, Google plans to make its implementation a requirement for new chipset launches.

For more details on this upcoming platform change, please refer to this article.

Legacy fs-verity support dropped

Fs-verity is a feature of the Linux kernel that Android uses to continuously verify the integrity of APK files using trusted digital certificates. Fs-verity support was initially introduced with Android-specific kernel patches alongside the Pixel 3’s kernel release. The feature was later upstreamed to the Linux kernel and merged in version 5.4 before being backported to Android Common Kernel branches 4.14 and higher. In the process, the API was changed, leaving two different implementations: the legacy fs-verity API and the standard fs-verity API.

Android’s legacy API for fs-verity has been dropped in Android 13. It is no longer possible to use the legacy fs-verity implementation, thus devices with the system property ro.apk_verity.mode set to ‘1’ will need to migrate to the standard fs-verity implementation. GMS requirements already mandate that OEMs build their kernels with CONFIG_FS_VERITY enabled when shipping devices with Android 11 or later, so most devices upgrading to Android 13 should not be affected by this change. In fact, new devices using ACK 4.14 or higher and EXT4/F2FS for the userdata partition automatically come with support for fs-verity.

Memory Tagging Extension for Armv8.5+ devices

Mistakes with pointers in C or C++ that cause memory to be misinterpreted, i.e., memory safety problems, are some of the most severe bugs encountered by software engineers. The Google Chrome team found that nearly 70% of its serious security bugs are memory safety problems, so it has been working to secure Chrome by addressing these problems on multiple fronts: implementing compile-time and runtime checks to make sure that pointers are correct, and exploring memory-safe languages like Rust for parts of the codebase.

Memory safety bugs also represent a large proportion of high severity security vulnerabilities in the Android platform, which is why Google has been using HWASan to find memory issues, has been writing parts of Android in Rust, and has been preparing to support Memory Tagging Extension (MTE) throughout the Android software stack.

MTE is a hardware feature of Arm v8.5+ CPUs that mitigates memory safety bugs by providing more detailed information about memory violations. It has low CPU overhead so it can always run without significantly affecting the performance.

With the first batch of SoCs with Armv9 CPUs now on the market, Google is adding a new setting in the Developer Options of Android 13 that toggles software support for MTE. The toggle, called “reboot with MTE”, is hidden by default but can be surfaced by the OEM by setting ro.arm64.memtag.bootctl_supported to true. This property can be set by OEMs that don’t want to enable MTE by default yet but want to offer users a preview that can be manually enabled.

After enabling MTE, a message will appear that reads as follows: “System will reboot and allow to experiment with Memory Tagging Extension (MTE). MTE may negatively impact system performance and stability. Will be reset on next subsequent reboot.”

Privacy Sandbox

As third-party cookies are being phased out on the web, Google is evaluating how digital advertising can also be reworked on Android. In February, Google announced a multi-year initiative to build the “Privacy Sandbox” on Android. The goal is to introduce new, more private advertising solutions that limit sharing of user data with third parties and work without cross-app identifiers, including the advertising ID provided by Google Play Services. Google also wants to reduce covert data collection from advertising SDKs integrated into apps.

The Privacy Sandbox comprises multiple projects: the Topics API, the SDK Runtime, the Attribution Reporting API, and FLEDGE on Android. The Topics API is an implementation of interest-based advertising (IBA), a form of personalized advertising that selects ads based on the user’s interests derived from the apps the user has engaged with in the past. The SDK Runtime is a platform capability that allows third-party SDKs to run in a dedicated runtime environment, isolating them from the sandbox of the application using the SDK. The Attribution Reporting API helps advertisers measure the performance of their campaigns without using cross-party identifiers. Lastly, FLEDGE enables remarketing and custom audience targeting without sharing identifiers across apps or sharing a user’s app interaction information with third parties.

Since the Privacy Sandbox is a multi-year initiative, the APIs and services it offers are bound to change over the coming months. Furthermore, we may not even see the first public release of these APIs with the stable release of Android 13 later this year. However, to give developers an opportunity to try these APIs and share feedback, Google is maintaining a separate developer preview for the Privacy Sandbox on Android.

The first of these developer previews was released in April, bringing a preview of the Topics API and SDK Runtime. Device system images were made available for the Pixel 4 through Pixel 6, along with an Android SDK, a 64-bit Android Emulator system image, and code samples. Apart from the Privacy Sandbox additions, these images are feature-identical to the Android 13 beta builds.

The following sections provide a summary of each project within the Privacy Sandbox on Android initiative.

Topics API

The Topics API is aimed at providing advertisers coarse-grained interest signals (called topics) derived from a user’s app usage. Topics are human-readable interest signals drawn from a human-curated taxonomy numbering somewhere between a few hundred and a few thousand entries. The taxonomy will be tailored to the types of ads that can be shown in Android apps, but the initial list is not available as of early May.

Google is training a classifier model on publicly available app information (such as app names, descriptions, and package names) to derive topics of interest. This model uses signals such as apps installed or recently used to compute topics of interest on-device. The system will use this model to compute the user’s top 5 topics once every epoch (the period of time when topics are computed). 

Apps that call the Topics API may be given a list of up to 3 topics, 1 from each of the past 3 epochs. Google says that providing up to 3 topics ensures that frequently used apps will learn at most 1 new topic each epoch, while infrequently used apps will still have enough topics to find relevant ads. Because the system assigns one of several topics to each app that invokes the API, it’s difficult for two apps to correlate information with a specific user since different apps get different topics. The Topics API will only return topics that the caller has observed in the past, however.
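
The per-epoch selection rule described above can be sketched as plain logic. Everything here — the function name, data shapes, and selection rule — is an illustrative simplification of Google’s design proposal, not the real Android API surface:

```kotlin
// Hypothetical sketch of the Topics API's selection rule described above.
// All names and data shapes here are illustrative simplifications of
// Google's design proposal, not the real Android API surface.

// topTopicsPerEpoch: the user's top topics for each of the last 3 epochs,
// most recent first. A caller only receives topics it has itself observed.
fun topicsForCaller(
    topTopicsPerEpoch: List<List<String>>,
    observedByCaller: Set<String>
): List<String> =
    topTopicsPerEpoch
        .take(3)                                // at most the past 3 epochs
        .mapNotNull { epochTopics ->
            // at most 1 topic per epoch, and only if the caller observed it
            epochTopics.firstOrNull { it in observedByCaller }
        }

fun main() {
    val epochs = listOf(
        listOf("Sports", "Travel", "Music"),    // current epoch's top topics
        listOf("Travel", "News"),
        listOf("Cooking", "Sports")
    )
    println(topicsForCaller(epochs, setOf("Travel", "Sports")))
}
```

Note how two apps with different observation histories receive different topic lists for the same user, which is what makes cross-app correlation difficult.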

Apps will be able to opt out of the Topics API through new manifest elements, and users will be able to view and remove topics that are associated with their app usage. Neither of these have been implemented yet, however.

Google says that the Topics API implementation and its usage of the classifier model will be made available in AOSP. The classifier model itself will be freely available to developers who want to test which topics their apps classify to.

SDK Runtime

Currently, SDKs that are bundled with an app are executed within the host app’s sandbox. This gives them the same privileges and permissions as their host app and also lets them access the host app’s memory and storage. Unscrupulous SDKs have used this to their advantage to collect and share user data unbeknownst to the user or developer. In order to combat this, Android 13 enables support for running select third-party SDKs in a dedicated runtime environment called the SDK Runtime.

Compatible SDKs — referred to as runtime-enabled (RE) SDKs — operate in an isolated process and communicate with apps via well-defined permissions and APIs. SDKs in the SDK Runtime will by default have access to permissions commonly used by ads-related SDKs, such as INTERNET and AD_ID. They will also be granted permissions to access the new privacy-preserving APIs that provide core advertising functionality without the need for cross-app identifiers.

Runtime-enabled SDKs aren’t statically linked and packaged with apps under this design. Instead, SDK developers upload their versioned SDKs to app stores and app developers specify their dependencies by version. When the user downloads an app, the installer downloads the app’s specified dependencies from the app store.

Currently, the SDK Runtime is designed to support advertising-related SDKs. Like other projects in the Privacy Sandbox, the SDK Runtime is under active development, so Google is still seeking feedback on its design. The design doc goes into more detail on the changes to access, execution, communication, development, and distribution that SDK developers need to be aware of.

Attribution Reporting

Google’s proposed Attribution Reporting API provides support for the following features: conversion reporting, optimization, and invalid activity detection. The API helps advertisers measure the performance of their ad campaigns by showing them conversion counts and values across campaigns, ad groups, and ad creatives. The API provides per-impression attribution data that can be used to train ML models, thus allowing advertisers to optimize ad spend. Lastly, the API can provide reports to analyze invalid traffic and ad fraud.

The Attribution Reporting API supports the aforementioned use cases while improving privacy over existing mobile attribution and measurement solutions that use cross-party identifiers. It does this by limiting the number of bits available for event-level reports, enabling higher-fidelity conversion data in aggregatable reports only, rate limiting available conversions and the number of ad tech platforms that can be associated with a single attribution source, and incorporating various noise adding techniques.

In order to use the Attribution Reporting API, ad tech platforms must first complete an enrollment process. Then, they must register attribution sources and conversions. The API will then match conversions to attribution sources and send conversions off-device through event-level and aggregatable reports to ad tech platforms.

FLEDGE on Android

FLEDGE, short for “First Locally-Executed Decision over Groups Experiment”, is another web API that is being adapted to Android. It’s designed for advertisers who want to serve ads to potentially interested users, i.e., users who previously interacted with the advertiser’s app.

FLEDGE encompasses two APIs: the custom audience API and the ad selection API. The custom audience API lets apps or SDKs create and use a custom audience representing a group of users with common intentions or interests. Audience information is stored locally on the device, limiting the sharing of user information. The ad selection API orchestrates auction execution for ad tech platforms. Ad tech platforms are expected to write JavaScript code implementing the buy-side bidding logic, buy-side ad filtering and processing, and sell-side decision logic, which are run sequentially on-device.

Flow chart showing the custom audience management and ad selection workflow. Source: Google.

Remote Key Provisioning

The Android Keystore API lets apps store cryptographic keys in a container that can later be used for cryptographic operations. The key may be stored in a software or hardware-backed container; the latter is far more secure as the key material is never exposed outside of the device’s secure hardware. When assembling a device, OEMs request a Google-signed attestation private key that is provisioned to the device’s secure hardware before leaving the factory.

If an attacker somehow manages to compromise that key, whether via a leak from the factory or a vulnerability in the secure hardware, it would need to be revoked so it can’t be used by other devices. Doing so would prevent potentially tens of thousands of users of the same device from accessing many features, since a single key is often used to provision many devices and hardware-backed key attestation is employed by a range of apps and services that require security, such as SafetyNet Attestation, Identity Credential, Digital Car Key, and more. In order to address these issues, Google is revising its attestation infrastructure to add support for Remote Key Provisioning, which will be mandatory for Android 13 devices.

Under the new Remote Key Provisioning scheme, OEMs will no longer provision attestation private keys in the factory. Instead, a unique, static public/private keypair is generated by each device at the factory, and the OEM extracts the public portion of the keypair and submits it to Google. The public keys serve as the basis of trust for provisioning later, while the private key never leaves the secure hardware where it’s generated. When the device is powered on and connected to the Internet, it sends a certificate signing request to Google, signed with the private key in the secure hardware. Google verifies the authenticity of the request by looking up the public keys it stored earlier, and if it’s verified, a temporary attestation certificate is sent to the device. Keystore then assigns these certificates to apps requesting attestation. When the attestation certificate expires, the process repeats.

How Remote Key Provisioning works. Credits: Google.

Google says that Remote Key Provisioning is “privacy preserving” because each application receives a different attestation key, the keys themselves are regularly rotated, and backend servers are segmented so the server verifying the public key does not see the attached attestation keys. While these changes won’t impact end users, developers that rely on hardware-backed key attestation should be aware that the certificate chain is longer than before, that the root of trust will use an ECDSA key instead of an RSA key, that RSA key attestation will be deprecated, and that certificates will generally be valid for up to two months before they’re rotated.

Shared UID migration

Android isolates app processes from one another by assigning a unique user ID (UID) to each application at installation. This is a key part of the application sandbox, which is one of Android’s core security features. However, apps signed by the same key can have a shared user ID, which enables them to access each other’s data and run in the same process, letting them communicate directly instead of via IPC. This is achieved by setting the android:sharedUserId attribute to the same value in the manifests of both apps.

Google highly discourages use of this feature and deprecated the android:sharedUserId attribute in Android 10. However, Android doesn’t support migrating off a shared user ID, so existing apps that utilize the feature cannot remove the attribute from their manifests. This will change in Android 13, which will provide apps a way to migrate off a shared UID.

Although the exact migration mechanism hasn’t been documented, the Android developer documentation mentions shared UID migration in several classes. Apps can define a sharedUserMaxSdkVersion which is the maximum SDK version for which the app will remain in the UID defined in android:sharedUserId. The ACTION_PACKAGE_REMOVED, ACTION_UID_REMOVED, and ACTION_PACKAGE_ADDED intents sent by the system will include the new boolean extra field EXTRA_UID_CHANGING to indicate that the package is changing its UID, which happens when a package is leaving sharedUserId in an upgrade. If so, the ACTION_PACKAGE_REMOVED intent may also contain the new EXTRA_NEW_UID field supplying the new UID the package will be assigned.
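
Based on those docs, a manifest for an app opting out of its shared UID on Android 13 and newer might look like the following sketch (the package and shared UID names are placeholders):

```xml
<!-- Sketch: this app keeps its shared UID only up to API level 32, so on
     Android 13 (API 33) and later installs it is assigned its own UID. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app"
    android:sharedUserId="com.example.shareduid"
    android:sharedUserMaxSdkVersion="32">
    ...
</manifest>
```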

Virtualization support for isolated compilation

Android 13 includes the first release of the pKVM hypervisor and virtual machine framework. Google’s goal is to de-privilege and isolate third-party code (such as third-party code for DRM and cryptography) from Android by having it execute in a virtual machine at the same privilege level as the OS and kernel rather than at a higher level.

To accomplish this, Google has chosen to deploy KVM as the common hypervisor solution (pKVM is simply KVM with additional security features) and crosvm as the virtual machine manager. pKVM is enabled through the kernel, while crosvm is shipped as part of the new Virtualization (com.android.virt) Mainline module. Google has been using the Pixel 6 to test pKVM and the Virtualization module, but prior to the Android 13 release, neither was enabled in production builds. Starting with Android 13, however, the Pixel 6 series ships with the Virtualization module as well as KVM support out of the box. This allows the devices to securely boot operating systems in a virtual machine.

The Virtualization module contains images for a lightweight and headless build of Android called “microdroid” which is used to execute targeted payloads. Microdroid is currently used for “isolated compilation” of boot and system_server classpath JARs. This logic is handled by the new CompOS module (com.android.compos) which manages isolated compilation. Further uses of virtualization in Android 13 have yet to be implemented.

For more information on virtualization in Android 13, refer to this article. For a guide on how to use crosvm on the Pixel 6 series, refer to this article.

Head tracking sensor support

Google began work on implementing spatial audio with dynamic head tracking support in Android 12L, but full support for the feature has been added to Android 13. Android’s Sensor class has added a new constant for head tracking sensors, STRING_TYPE_HEAD_TRACKER or android.sensor.head_tracker. On devices that declare head tracking support, denoted by the feature android.hardware.sensor.dynamic.head_tracker, the SpatializerHelper class can initialize head tracking sensors denoted by a UUID reported by a connected Bluetooth A2DP device. The head tracking mode can then be set to one of the supported modes: STATIC (no head tracking), RELATIVE_WORLD (no screen tracking), or SCREEN_RELATIVE (full screen-to-head tracking). The device must ship with a spatializer effect that can use head tracking in order for head tracking functionality to be initialized and the corresponding APIs to be active.

Head tracking combines accelerometer and gyroscope data to determine the rate of rotation and orientation of a user’s head relative to an arbitrary reference frame. Android provides head tracking sensor event data in Euler vector representation, with the direction indicating the axis of rotation and magnitude indicating the angle to rotate around that axis. The axes of this coordinate frame are centered around the head, with the X axis crossing the user’s ears, the Y axis crossing the back of the user’s head through their nose, and the Z axis crossing from the neck through the top of the user’s head.
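
Decoding such a sample is straightforward vector math. The helper below is an illustrative sketch of the Euler-vector representation described above (the type and function names are not part of the Android API): the vector’s magnitude is the rotation angle in radians, and normalizing it yields the rotation axis.

```kotlin
import kotlin.math.sqrt

// Sketch of decoding a head-tracker sample. Per the representation described
// above, the three values form an Euler (rotation) vector: its direction is
// the axis of rotation and its magnitude is the angle, in radians, to rotate
// about that axis. The helper type and function are illustrative, not part
// of the Android API.
data class Rotation(val angleRad: Double, val axis: List<Double>)

fun decodeHeadTrackerEvent(values: FloatArray): Rotation {
    val x = values[0].toDouble()
    val y = values[1].toDouble()
    val z = values[2].toDouble()
    val angle = sqrt(x * x + y * y + z * z)
    val axis = if (angle > 1e-9)
        listOf(x / angle, y / angle, z / angle)  // unit axis of rotation
    else
        listOf(0.0, 0.0, 0.0)                    // no rotation at all
    return Rotation(angle, axis)
}

fun main() {
    // A rotation of 0.5 rad about the Z axis (through the top of the head)
    println(decodeHeadTrackerEvent(floatArrayOf(0f, 0f, 0.5f)))
}
```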

Because certain head movements are physically impossible, accelerometer and gyroscope data will be restricted to certain axes. Android 13 adds the new Sensor.TYPE_ACCELEROMETER_LIMITED_AXES and Sensor.TYPE_GYROSCOPE_LIMITED_AXES constants to denote sensor types where one or two axes are not supported.

When a head tracking device is removed and then put back, the reference frame may have significantly changed, causing a discontinuity. The new firstEventAfterDiscontinuity field will be set to true when this happens, so apps can be aware of the sudden and significant change in the reference frame.

Heading sensor support

Heading sensors provide the direction the device is pointing relative to true north. These sensors combine data from accelerometers and geomagnetic field sensors to determine the direction. Since Android already supports these two sensors, it is already possible to determine a device’s direction relative to true north. Android 13, however, adds support for a new composite sensor, TYPE_HEADING, which directly provides the heading without additional calculations.

The TYPE_HEADING sensor returns values between 0.0 and 360.0, with 0 indicating north, 90 east, 180 south, and 270 west. It also returns the accuracy in degrees, defined at 68% confidence. If the sensor returns a heading of 60 degrees with an accuracy of 10 degrees, there’s a 68 percent probability that the true heading is between 50 and 70 degrees, i.e., within one standard deviation of the mean assuming a normal distribution.
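
That interpretation can be sketched as a small helper. The function below is illustrative and not part of the Android API; it simply turns a heading plus its one-standard-deviation accuracy into a confidence interval, wrapping around north:

```kotlin
// Sketch of interpreting a TYPE_HEADING sample: the sensor reports a heading
// in degrees plus an accuracy at 68% confidence (one standard deviation).
// This helper is illustrative and not part of the Android API.
fun headingInterval(headingDeg: Double, accuracyDeg: Double): Pair<Double, Double> {
    fun wrap(d: Double) = ((d % 360.0) + 360.0) % 360.0  // keep in [0, 360)
    return wrap(headingDeg - accuracyDeg) to wrap(headingDeg + accuracyDeg)
}

fun main() {
    // 60 degrees with 10 degrees of accuracy: the true heading lies between
    // 50 and 70 degrees with ~68% probability
    println(headingInterval(60.0, 10.0))
    // intervals wrap around north
    println(headingInterval(5.0, 10.0))
}
```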

Smart idle maintenance

Android 13 adds a smart idle maintenance service, which intelligently determines when to trigger filesystem defragmentation without hurting the lifetime of the UFS storage chip.

Smart idle maintenance can be manually run through the ‘sm’ shell command:

sm idle-maint [run|abort]

USB HAL 2.0 with support for limiting power transfer and audio docks

Google is updating Android’s USB HAL to version 2.0, introducing several new features. First, the new enableUsbData API lets system apps toggle USB data on a specific port rather than on all ports. Another addition, which is not present in AOSP at the moment, is the limitPowerTransfer API. When this API is invoked, power transfer to and from the USB port is limited; the limit is lifted once the USB service detects that the USB connection has been removed. Next, the new enableUsbDataWhileDocked API enables data transfer over USB while the device is docked. Lastly, support for USB digital audio docks has been introduced, represented by the device type DEVICE_OUT_DGTL_DOCK_HEADSET. These system APIs are guarded by the MANAGE_USB permission, which has the signature|privileged protection level.

These system APIs, along with other new features in Android 13, are likely intended for tablets that can be docked.


What are the new APIs in Android 13?

LED flash brightness control

The vast majority of Android smartphones and tablets have at least one rear-facing camera accompanied by an LED flash module. Though Android supports toggling the LED flash on or off through an API, it doesn’t provide an API for modulating the brightness. This, however, will change in Android 13.

The Android 13 release includes new methods in the CameraManager class that let apps get and set the torch strength level. Only devices that report a value greater than 1 when apps use CameraCharacteristics.FLASH_INFO_STRENGTH_MAXIMUM_LEVEL will support programmatically setting the brightness of the flashlight. OEMs will need to implement a new version of the camera device HAL in order to add support for this API. Because of Google’s vendor freeze requirements, however, support for this feature will likely be very limited among devices that upgrade to Android 13.

For more information on Android 13’s new LED flash brightness control API, please refer to this article that goes more in-depth.

Block users from adding new WiFi networks

Enterprises can now block users of fully managed devices from adding a new Wi-Fi network in Android 13. Device or profile owner apps can add the new UserManager.DISALLOW_ADD_WIFI_CONFIG restriction to hide the “add network” option in Internet settings and make Settings reject any requests to add a Wi-Fi network.

WiFi SSID policy

Android 13 will let enterprises configure an allowlist or denylist of Wi-Fi SSIDs that the device can connect to. The new WifiSsidPolicy API lets device admins set a restriction policy that the network must satisfy. If the policy type is a denylist, then the device cannot connect to any networks on the list. If the policy type is an allowlist, then the device can only connect to networks on the list. Networks configured by the admin are not exempted from the restriction policy set by this API. Furthermore, this API does not prevent users from adding a network that is present on the denylist or missing from the allowlist – if they do so, the network will simply be disconnected after being added.

Color vector fonts

Android 13 can render COLR version 1 fonts, a new, highly compact font format that supports color gradients. The system emoji have also been updated to the COLRv1 format. Android will handle rendering text using COLRv1 for most apps, but for apps that implement their own text rendering using the system’s fonts, Google recommends at least testing how emoji render. For more information on COLRv1, check out the announcement on the Chrome blog.

Stylus handwriting

Android 13 introduces an API for the current input method (i.e., the keyboard) to receive stylus events when an editor is focused. To test this behavior, developers can enable the new “Stylus handwriting” setting in developer options. Android will check if this developer option is enabled before calling InputMethodManager.startStylusHandwriting to start the stylus handwriting session on the given View. Input methods that declare support for stylus input should show an inking window on ACTION_DOWN events to let the user perform handwriting input.

Developer option to enable stylus handwriting in Android 13

Text conversion APIs

Languages that use phonetic alphabets, such as Japanese and Chinese, are getting improvements to search speeds and auto-completion in Android 13. Apps can use the new text conversion API to convert characters between phonetic alphabets. This will, for example, make it so Japanese users can type search queries in Hiragana and immediately see results in Kanji.

Framerate interventions

Android’s Game Mode, first introduced in Android 12, now supports setting the FPS that a game should run at. By setting the attribute allowGameFpsOverride to true, developers can opt in to FPS override interventions. Developers can override the FPS of their game in Android 13 through the CLI for Game Mode:

cmd game set --fps [30|45|60|90|120|disabled]
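
Opting in is done through the game’s Game Mode configuration. The sketch below shows the documented meta-data hookup and a minimal config resource; treat the exact file name and schema as assumptions to verify against the final Android 13 Game Mode documentation:

```xml
<!-- AndroidManifest.xml: point the system at the game mode config (sketch) -->
<application>
    <meta-data
        android:name="android.game_mode_config"
        android:resource="@xml/game_mode_config" />
</application>

<!-- res/xml/game_mode_config.xml: opt in to FPS override interventions -->
<game-mode-config xmlns:android="http://schemas.android.com/apk/res/android"
    android:allowGameFpsOverride="true" />
```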

Loading time improvements via GameState hints

Android’s GameManager API has added a method called setGameState that lets games communicate their current state to the platform. Games can pass their top-level state, indicating whether the game can be interrupted or not. Games can also tell the platform when they’re loading something (assets, resources, shader compilation, etc.), which passes a hint to the power HAL to boost CPU performance and improve loading times. The loading time hint is part of the new power HAL version, which adds the GAME_LOADING mode to the Mode.aidl file; device makers should configure the powerhint.json file to specify the CPU performance tuning that should be done when the GAME_LOADING mode is active.

Google plans to add a test to VTS that enforces GAME_LOADING mode for all devices which ship with Android 13 or later, but we do not know if this requirement is final. Due to the Google Requirements Freeze (GRF) program, it’s possible that many devices upgrading to Android 13 will not include the updated power HAL version with the new GAME_LOADING mode.

Programmable shaders

The Android Graphics Shading Language (AGSL) is derived from the OpenGL Shading Language (GLSL) but is designed to work within Android’s rendering engine to customize painting within Android’s canvas and filter View content. Android internally uses RuntimeShaders, with behavior defined using AGSL, to implement blur, ripple effects, and stretch overscroll in previous versions of Android. With Android 13, developers can create advanced effects of their own using programmable RuntimeShader objects.

Schedule exact alarms without user prompts

Android 13 adds the USE_EXACT_ALARM permission that lets apps freely schedule exact alarms without prompting the user, much like the SCHEDULE_EXACT_ALARM permission introduced in Android 12. SCHEDULE_EXACT_ALARM, however, can be revoked by the user at any time through Settings → Apps → Special app access under the “Alarms & reminders” page, as it is an “appop” permission. USE_EXACT_ALARM, on the other hand, has a protection level of normal, so it is granted at install time and cannot be revoked without uninstallation.

However, Google warns that “app stores may enforce policies to audit and review the use of this permission” since this permission was introduced “only for apps that rely on exact alarms for their core functionality.”  The company specifically says that an upcoming Google Play policy will prevent apps from using the USE_EXACT_ALARM permission unless they’re an alarm app, a clock app, or a calendar app that shows notifications for upcoming events.
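
For an alarm, clock, or calendar app, adopting the permission is a single manifest entry; since it is an install-time permission, no runtime request is needed. A sketch:

```xml
<!-- Sketch: declared by an alarm/clock/calendar app only. As a normal
     (install-time) permission, it is granted automatically at install. -->
<uses-permission android:name="android.permission.USE_EXACT_ALARM" />
```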

Anticipatory audio routing

Android 13 introduces new audio route APIs to AudioManager that media apps can use to anticipate how their audio will be routed. Apps can use the new getAudioDevicesForAttributes() API to retrieve the list of devices that may be used to play the specified audio track based on the provided audio attributes. The getDirectProfilesForAttributes() API helps determine if the audio stream can be played directly on those devices, i.e., without resampling or downmixing.

More granular media file permissions

Android 10 introduced the concept of “Scoped Storage” to restrict applications’ access to files on external storage directories. One of the biggest changes introduced with Scoped Storage is the restriction of what files can be accessed if an app holds Android’s READ_EXTERNAL_STORAGE permission. Starting with Android 11, apps holding the READ_EXTERNAL_STORAGE permission aren’t granted broad access to the external storage but rather are given access to media files owned by other apps residing in well-defined media collections (MediaStore.Images, MediaStore.Video, and MediaStore.Audio; or MediaStore.Files).

In an effort to improve transparency and provide more control to users, Android 13 makes media file access even more granular. Apps targeting Android 13 must now request individual permissions to read audio (READ_MEDIA_AUDIO), video (READ_MEDIA_VIDEO), or image files (READ_MEDIA_IMAGES). If an app requests permissions to read video and image files at the same time, the system will combine the permissions dialog for both. Apps targeting Android 12 or lower must continue to request READ_EXTERNAL_STORAGE to access media files owned by other apps, however. Alternatively, apps can interact with the system document picker app or the new system photo picker for files and photos respectively in order to retrieve user-selected files/photos without the need for any permissions.
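
For example, a gallery-style app targeting Android 13 might declare the granular permissions while retaining the legacy one for older OS versions, a pattern sketched below:

```xml
<!-- Sketch: granular media permissions for Android 13 (API 33) and later -->
<uses-permission android:name="android.permission.READ_MEDIA_IMAGES" />
<uses-permission android:name="android.permission.READ_MEDIA_VIDEO" />
<!-- Still required on Android 12L (API 32) and below -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"
    android:maxSdkVersion="32" />
```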

OpenJDK 11 support

Google has recently been experimenting with building Android with Java 11 as the default version, and the company says that it plans not only to refresh Android’s Core Libraries in Android 13 but also to backport these changes to Android 12 devices through an update to the ART module. This means that Android’s Core Libraries will align with the OpenJDK 11 LTS release, bringing both library updates and new programming language features for app and platform developers.

Quick Settings Placement API

In Android 7.0 Nougat, Google introduced the TileService API to let apps add their own custom tiles to the Quick Settings. However, in order to add a tile from a third-party app to the Quick Settings, the user needs to pull down the notifications shade, tap the Quick Settings edit button (usually taking the form of a pencil icon), and then scroll down to find the tile they want to add. While third-party apps have numerous ways to inform users about the existence of their custom tiles, users still need to manually add the tile.

A screenshot of a sample app using the new tile placement API to prompt a user to add a tile to the set of active Quick Settings tiles. Source: Google.

Starting in Android 13, however, a new tile placement API will let apps prompt users to directly add their custom tile to the set of active Quick Settings tiles. When an app calls this API, a system dialog will appear that lets the user add the tile in a single tap. This will make it easier for users to discover your app’s custom Quick Settings tiles.

Computer and App Streaming device profiles

Google introduced the Role API in Android 10 to grant multiple, often unrelated permissions to apps based on the type of role the app fulfills. For example, apps holding the DIALER role are automatically granted permissions related to phone calling, contacts, messaging, and microphone. Since roles give access to a wide array of (often sensitive) permissions, and it’s not always necessary for apps to need those permissions at all times, Google devised a way for the system to temporarily grant a role.

In Android 12, Google added the Companion Device Manager (CDM) profiles feature to make it easier for apps to request and be granted access to the requisite permissions needed to manage a smartwatch. Under the hood, a system app called Companion Device Manager grants the COMPANION_DEVICE_WATCH role to apps that request it, giving those apps access to permissions they need to sync phone status and data with a smartwatch. Users only see a single permissions dialog instead of multiple, saving time and reducing friction. When the user resets their smartwatch, causing the association between the device and the smartwatch to be lost, the Companion Device Manager app revokes the role until the next time an association is made.

Smartwatches aren’t the only “companion” devices where this flow can be applied to simplify setup. Recognizing this, Google has created the new COMPANION_DEVICE_COMPUTER and COMPANION_DEVICE_APP_STREAMING roles in Android 13. The first role grants the permissions needed to access notifications, recent photos, and recent media, while the second role grants the permissions for creating a virtual display (where apps can be launched and then streamed to a PC). Only system apps can hold these roles, however, as the underlying permissions have a system|signature protection level.

The role definition for COMPANION_DEVICE_COMPUTER
The role definition for COMPANION_DEVICE_APP_STREAMING

For a more detailed breakdown on these new roles and the Role API in general, refer to this article.

Background access of body sensors requires new permission

Android has long allowed applications to access data from sensors that measure the heart rate, temperature, or blood oxygen levels of the body. This data can only be accessed by applications that hold the BODY_SENSORS permission, which has a protection level of “dangerous”. Until Android 13, applications that held this permission could access body sensor data while in the background. Android 13 changes this by adding a new permission called BODY_SENSORS_BACKGROUND. Apps holding the BODY_SENSORS permission on Android 13 will only have access to body sensor data while the app is in use. In order to access body sensor data while in the background, apps must hold both the BODY_SENSORS and BODY_SENSORS_BACKGROUND permissions.

The new BODY_SENSORS_BACKGROUND permission also has a protection level of “dangerous”, but unlike the BODY_SENSORS runtime permission, BODY_SENSORS_BACKGROUND is hard restricted. This means that the PackageInstaller has to allowlist the permission while installing the app so it can later be granted by the user.
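
A fitness app that needs background heart-rate access on Android 13 would therefore declare both permissions in its manifest, along the lines of:

```xml
<!-- Sketch: both runtime permissions are needed for background access to
     body sensor data on Android 13; BODY_SENSORS alone covers in-use access -->
<uses-permission android:name="android.permission.BODY_SENSORS" />
<uses-permission android:name="android.permission.BODY_SENSORS_BACKGROUND" />
```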

Developer downgradable permissions

Applications need permissions to access many of Android’s APIs, but they may not necessarily need persistent access to those APIs. However, once they’ve been granted that permission — either at install-time or at runtime — they’ll retain that permission until the user uninstalls the app, the user manually revokes the permission, or the system automatically revokes the permission when the app enters hibernation.

In Android 13, Google has added a new API that enables developer downgradable permissions. Apps can trigger the revocation of one or more runtime permissions granted to the package calling the API. Apps that don’t need access to certain runtime permission-gated APIs can self-revoke those permissions so users can be assured those apps aren’t using those APIs without their knowledge.

Disable the screenshot shown in the recents overview

Android 13 introduces the setRecentsScreenshotEnabled API so developers can tell the system to never take a screenshot of an activity for use as a preview in the recents overview. This differs from the FLAG_SECURE window flag in that it only applies to screenshots the system takes for the recents overview — it does not block screenshots taken by the user or the Assistant.

Nearby device permission for Wi-Fi

Because a device’s location can be inferred by tracking nearby Wi-Fi APs and Bluetooth devices, Google decided to prevent apps from accessing Bluetooth or Wi-Fi scan results unless those apps hold location permissions. Gating these features behind location permissions made sense, given that scan results can be used to derive a user’s physical location, but it confused users into believing that their apps were tracking their location, since both ACCESS_COARSE_LOCATION and ACCESS_FINE_LOCATION are “dangerous” (i.e. runtime) permissions that require post-install user consent.

To reduce confusion, Google introduced new BLUETOOTH_SCAN, BLUETOOTH_CONNECT, and BLUETOOTH_ADVERTISE permissions under the NEARBY_DEVICES permission group in Android 12. These permissions can be requested by apps that need to interact with Bluetooth, and when one or more of them are requested by the app, the system prompts the user to allow the app access to “nearby devices”. An optional Manifest attribute called “neverForLocation” lets the app strongly assert that it won’t derive physical location.

In Android 13, Google is similarly decoupling Wi-Fi scanning from location. Android 13 introduces the new NEARBY_WIFI_DEVICES runtime permission under the NEARBY_DEVICES permission group. This permission should be requested by apps that need to manage a device’s connections to nearby Wi-Fi APs and will in fact be required to call many commonly used Wi-Fi APIs. The optional Manifest attribute “neverForLocation” will let developers strongly assert that their app won’t derive physical location from Wi-Fi scan results.
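A sketch of the Manifest pattern Google's migration guidance suggests: request the new Wi-Fi permission on Android 13 and up, while falling back to fine location only on older releases:

```xml
<!-- AndroidManifest.xml -->
<uses-permission android:name="android.permission.NEARBY_WIFI_DEVICES"
                 android:usesPermissionFlags="neverForLocation" />
<!-- Still needed for Wi-Fi scan results on Android 12 and lower -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"
                 android:maxSdkVersion="32" />
```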

Non-dangerous permission to read the phone state

By holding the READ_PHONE_STATE permission, Android apps can read the current cellular network information, status of any ongoing calls, and a list of PhoneAccounts registered on the device. This information may contain sensitive information, so the READ_PHONE_STATE permission has a protection level of “dangerous” and hence must be granted by the user at runtime. For apps that only need to determine the cellular network type, Android 13’s new READ_BASIC_PHONE_STATE permission provides a “non dangerous” alternative. This permission has a protection level of “normal”, hence it is granted by the system at install time.
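Because the new permission has a protection level of “normal”, declaring it in the Manifest should be all that's required — no runtime prompt is shown:

```xml
<!-- AndroidManifest.xml: granted automatically at install time on Android 13+ -->
<uses-permission android:name="android.permission.READ_BASIC_PHONE_STATE" />
```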

Runtime permission for notifications

Unlike with many other APIs, apps can by default post notifications without requesting any permission. Notifications are the key way for Android apps to interact with users outside of the app, so it makes sense that Google didn’t gate them behind a permission check.

While most apps use notifications to post useful alerts and reminders, some misuse them to send unsolicited advertisements. Android does let users turn off notifications on a per-app and per-channel basis through an interface in Settings, but this approach has problems. Because notifications are opt-out rather than opt-in, and the opt-out settings are buried several layers deep in Settings, most users keep the default notification settings. That suits developers and marketers who use notifications to re-engage users with their apps and services, but when too many apps post notifications, each notification matters less and the stream becomes overwhelming for the user.

That’s why in Android 13, Google has reworked the notification contract between apps and the Android OS. In Android 13, Google has added a runtime permission for notifications. However, in order to not be disruptive to users and developers, notification access in Android 13 is handled differently depending on the target API level of the app that’s being run. Regardless of an app’s target API level, however, Android 13 will prompt the user to grant an app permission to send (non-exempt) notifications.

Here’s how Android 13 handles notifications access based on an app’s target API level:

  • If a newly installed app’s target API level is…
    • 33, the app needs to declare the android.permission.POST_NOTIFICATIONS permission in its Manifest. This permission has a protection level of “dangerous”, so apps must request it at runtime, triggering a system permission prompt that the user has to accept. Packages that have not been granted the permission will have their notifications silently dropped by the system.
    • 32 or lower, the system will show the permission dialog when the app creates its first notification channel.
  • If an existing app’s target API level is…
    • 33, the system temporarily grants the app permission to send notifications until the first time an activity in the app is launched. The app must have had an existing notification channel and its notifications must not have been explicitly disabled by the user.
    • 32 or lower, the system temporarily grants the app permission to send notifications until the user explicitly selects an option in the permission dialog. The temporary grant persists if the user dismisses the permission dialog before making a choice.

The permission dialog for the new notification permission is structured like other dialogs for runtime permissions. If the user…

  • selects “Allow,” then the app can send notifications through any channel and post notifications related to foreground services.
  • selects “Don’t allow,” then the app cannot send notifications through any channel, except for a few specific roles.
  • swipes the dialog away, then the app can only send notifications if the system has a temporary grant.

MediaStyle notifications are exempt from Android 13’s notification runtime permission.

This change puts Android in line with iOS, which also requires users to opt in to notifications from apps. Developers of Android apps will now need to put in effort to convince users to turn on notifications. Developers are encouraged to request the notification permission in context, i.e. prompt the user only after explaining why the app needs the permission. Once the app has been granted permission, developers should use the permission responsibly, as users can revoke it at any time. Apps can check whether the user has enabled notifications by calling the areNotificationsEnabled() method of NotificationManager.
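A sketch of requesting the permission in context on Android 13 — `NOTIFICATION_REQUEST_CODE` is an arbitrary app-defined constant, and a real app would first show its own UI explaining why notifications are useful:

```kotlin
import android.Manifest
import android.app.Activity
import android.app.NotificationManager
import android.os.Build

private const val NOTIFICATION_REQUEST_CODE = 1 // arbitrary app-defined code

// Sketch: request POST_NOTIFICATIONS only when it hasn't been granted yet.
fun ensureNotificationPermission(activity: Activity) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.TIRAMISU) return
    val nm = activity.getSystemService(NotificationManager::class.java)
    if (!nm.areNotificationsEnabled()) {
        activity.requestPermissions(
            arrayOf(Manifest.permission.POST_NOTIFICATIONS),
            NOTIFICATION_REQUEST_CODE
        )
    }
}
```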

Users can opt out of Android 13’s runtime permission dialog for existing apps targeting API level 32 or lower by changing the value of Settings.Secure.notification_permission_enabled from ‘1’ to ‘0’.

Safer exporting of context-registered receivers

Android 12 required app developers to explicitly declare whether any activity, service, or broadcast receiver with intent filters statically defined in the app’s Manifest file should be exported or not. Google asked app developers to carefully consider whether they wanted to expose their manifest-declared intent receivers to other apps, and in Android 13, they’re doing the same for context-registered receivers as well.

Developers that dynamically register broadcast receivers in their apps should add either the RECEIVER_EXPORTED or RECEIVER_NOT_EXPORTED flag. This way, developers can decide if they want their receivers to be available for other apps to send broadcasts to. Google isn’t requiring that apps targeting Android 13 utilize this feature, but they highly recommend it as a security measure.
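A sketch of registering a receiver for an app-internal broadcast — `com.example.ACTION_DATA_REFRESH` is a hypothetical action string; the flagged overload is only available on Android 13 and up:

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.IntentFilter
import android.os.Build

// Sketch: dynamically register a receiver that other apps cannot send to.
fun registerInternalReceiver(context: Context, receiver: BroadcastReceiver) {
    val filter = IntentFilter("com.example.ACTION_DATA_REFRESH")
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
        context.registerReceiver(receiver, filter, Context.RECEIVER_NOT_EXPORTED)
    } else {
        context.registerReceiver(receiver, filter)
    }
}
```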

Ambient Context events

A new framework API called “Ambient Context” has been added to Android 13, but it is currently undocumented. Android is providing a client API that apps can subscribe to in order to receive notice of AmbientContext events such as coughing (EVENT_COUGH) and snoring (EVENT_SNORE). The API also provides apps with information on the start and end time of detected events, the confidence that the detected event is accurate, and the intensity level of the event (ranging from LEVEL_LOW to LEVEL_HIGH). All of this data is provided by a service in a system app that implements the provider API, which only the system can bind to as the service should be gated behind the new BIND_AMBIENT_CONTEXT_DETECTION_SERVICE permission. Furthermore, only client apps that hold the new ACCESS_AMBIENT_CONTEXT_EVENT permission can access data provided by the Ambient Context API. According to Android’s Privacy Working Group (PWG), this permission will switch from a Role to a runtime permission in Android 14.

The system service that implements the provider API is defined in the framework config value ‘config_defaultAmbientContextDetectionService’. On Pixel devices, this value is defined as ‘com.google.android.as/com.google.android.apps.miphone.aiai.labs.ambientcontext.AiAiAmbientContextDetectionService’ which points to a service that doesn’t exist in public versions of the Android System Intelligence app. If this service were present, then to enable it, the device_config value ‘service_enabled’ under the ‘ambient_context_manager_service’ namespace would also need to be set to ‘true’. Then, the new ‘ambient_context’ CLI could be used to start or stop detection or query events.

Based on our understanding, it seems that Google is providing an interface for the system intelligence app (Android System Intelligence on Pixel devices) to detect sleeping-related events and then privately share those events with apps subscribing to the client API. This way, client apps that just need sleep data won’t also need the raw sensor data (such as continuous microphone usage) needed to detect sleep events. This will enable apps to implement sleep detection features in a privacy-preserving way, in line with the API updates Google has been making as part of its Private Compute Core initiative.

Cross device calling

While every smartphone can make phone calls, the same isn’t true for every tablet. According to the Google Play Console’s device catalog, only about 40% of tablets support telephony (android.hardware.telephony). Some tablet makers like Samsung offer a cross-device calling feature so users can make and receive calls on their tablet using the telephony service from a connected phone. Google is introducing similar APIs in Android 13 that will enable calls to be forwarded from a smartphone to a tablet or other device.

The new API is called “cross device calling” and has already been partially merged to AOSP. Android’s Telecom framework now supports pushing calls to remote endpoints, which may either contain a complete calling stack capable of carrying out a call on their own or lack the required calling infrastructure entirely. A cross device call streaming app can interface with the telecom stack to share updates about the status of the call at the endpoint.

Calls that are routed to endpoints that lack the required calling infrastructure are considered “tethered” external calls. Since tethered devices can’t carry out the phone call on their own, the audio stream from the phone is re-routed to the device using Android 13’s new external call audio routing API.

Splash screen style

Android 12 introduced system-generated splash screens for application launches. These splash screens can be minimally customized by developers, and in Android 13, Google has added a new API to declare the splash screen style. The new windowSplashScreenBehavior API can be used by apps to declare that they prefer to show an icon-style splash screen. This attribute only takes effect if the activity isn’t started with the flag SPLASH_SCREEN_STYLE_SOLID_COLOR, which removes the icon from the splash screen.
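A sketch of how an app theme might declare the preference — the attribute name and the `icon_preferred` value are taken from the Android 13 framework resources, and the theme name and parent here are placeholders:

```xml
<!-- res/values-v33/themes.xml -->
<style name="Theme.MyApp" parent="Theme.Material3.DayNight">
    <!-- Prefer the icon-style splash screen when no style flag overrides it -->
    <item name="android:windowSplashScreenBehavior">icon_preferred</item>
</style>
```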

Themed Icons API

Google introduced the third major version of their Material design language alongside the Android 12 release last year. One of the key features of Material You — the marketing name of Google’s updated design language — is dynamic color. Dynamic color exposes 5 dynamic color tonal palettes, each comprising 13 color values of varying luminance, as an API that system and third-party apps can call. Apps can follow the Material guidelines for dynamic color or their own design language when deciding how to use the color palettes to theme their own UIs. Since the dynamic color tonal palettes are generated from a single source color, which is usually picked from the user’s wallpaper, the resulting theme that’s applied across system and third-party apps can vary widely and feel personalized to the user.

An app’s UI isn’t the only area where dynamic color can be used. Widgets can also be recolored, as can some app icons in Android 12 on Pixel devices. In Android 12 on Pixel, Google introduced an experimental “themed icons” feature in their Theme Picker app. When enabled, dynamic colors are applied to various Google app icons whenever the wallpaper is changed. However, Google hardcoded a list of themeable icons within an XML file called grayscale_icon_map contained within the launcher, which also contains the drawable resources for the monochrome app icons.

In Android 13, Google is extending Material You dynamic color to all app icons. Google has updated the AdaptiveIconDrawable API to support themed app icons. Developers simply need to supply a monochromatic app icon and add the new monochrome inner element to the adaptive-icon element in ic_launcher.xml, pointing it at the monochromatic drawable. Developers that have already supplied an adaptive icon will find it easy to add support for themed icons in Android 13.
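A sketch of the updated adaptive icon resource — the drawable names are placeholders:

```xml
<!-- res/mipmap-anydpi-v26/ic_launcher.xml -->
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
    <background android:drawable="@color/ic_launcher_background" />
    <foreground android:drawable="@drawable/ic_launcher_foreground" />
    <!-- New in Android 13: a monochromatic layer the launcher can tint -->
    <monochrome android:drawable="@drawable/ic_launcher_monochrome" />
</adaptive-icon>
```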

Google says that themed app icons will initially appear on Pixel devices, but the company is working with its partners to bring them to more devices.

The feature can be enabled on Launcher3 by setting both the preference KEY_THEMED_ICONS and the feature flag ENABLE_THEMED_ICONS to true.


Miscellaneous changes

This section contains changes for Android on handheld devices that weren’t deserving of their own sections.

  • When enabling freeform mode or force desktop mode in Developer Options, a dialog now informs the user that they need to reboot before it’ll work.
  • The “show touches on screen” toggle in SystemUI’s screen recorder is functional again. This toggle was removed in Android 12L due to a bug with how the cursor is drawn.
  • A new animation has been added to cleanly transition between the smartspace widget on the lock screen and the smartspace widget on the home screen. Smartspace is a proprietary Google widget, but a basic form of it is available to Android partners. This video clearly shows the new animation. I’ve set the animation scales to 5X to lengthen the animations so the smartspace shared element transition is more visible.
  • Android supports creating restricted profiles that are limited in what apps they can launch and content they can view. By default, the ability to create a restricted profile is only available on devices without telephony capabilities. Android 13 replaces this check with a new config flag in Settings (config_offer_restricted_profiles).
  • The USB debugging icon has been updated for the Android T release.
The new icon for USB debugging in Android 13.
  • “Emergency call” on the lock screen has been changed to “Emergency.”
  • When silent mode is enabled, it’s possible to adjust the “touch feedback” level in Settings > Sound & vibration > Vibration & haptics.
  • Android 13 supports dragging app icons to swap the positions of apps in split-screen view. This does not work if the activities that are being swapped are the same (i.e. they’re multi-instance).
  • Android’s Battery Saver feature, which limits background activity and tweaks other settings to preserve battery life, can be turned on automatically when the battery level reaches a user-defined percentage. In previous versions, the minimum battery level that could be set by the user was 5%, but in Android 13, that minimum has been raised to 10%.
  • The toggle for app hibernation has been renamed. It was previously called “remove permissions and free up space” but is now “pause app activity if unused.”
  • The option to “enable Gabeldorsche”, Android’s next-generation Bluetooth stack, has been removed from developer options.
  • A setting to “allow mock modem” has been added. This is used to run the mock modem service for instrumentation testing.
  • Android 13’s Bluetooth stack has introduced support for the new Bluetooth Low Energy Audio standard, and on devices with chipsets supporting it, system engineers can toggle LE audio hardware offload in developer options.
  • Settings has added a new x-axis transition animation that can be seen here.
  • The taskbar’s app drawer icon now follows the system theme.
  • The long-press context menu of the taskbar now displays the split-screen shortcut which was previously only available from an app’s context menu on the home screen or app drawer.

What’s new for Android Automotive?


What’s new for Android TV?

Google released Android 13 Beta 1 for the ADT-3 developer kit on May 4, 2022. The new system image includes Google TV applications but very few user-facing changes compared to the previous Android 12-based image.

Android TV 13 Beta about screen

Android 13 Beta 2 for Android TV was released on May 11, 2022, but system images were only provided for the Android Emulator. The emulator image supports new Android 13 features like HDMI state changes and expanded picture-in-picture mode.

Expanded picture-in-picture mode

Picture-in-picture (PiP) mode was first introduced in Android 7.0 for Android TV devices before expanding to all other device types in Android 8.0. PiP is a multi-window mode that enables watching a video in a small window that overlays other content on screen. As of Android 12, PiP windows can be moved around, stashed to the side, or resized, though resizing is limited by aspect ratio. By default, the aspect ratio of a PiP window is 1.777778:1 (16:9) to match most video content, but developers can set a custom aspect ratio between 1:2.39 and 2.39:1. In Android 13, however, developers can create PiP windows that are even longer or wider than before.

On devices that support Android 13’s new expanded picture-in-picture multi-window mode, defined by the system feature ‘android.software.expanded_picture_in_picture’, developers can set the aspect ratio of a PiP window to be less than 1:2.39 or greater than 2.39:1. This new expanded PiP multi-window mode is intended for Android TV devices and its logic can be found in the classes under com/android/wm/shell/pip/tv in SystemUI. According to this code, expanded PiP windows can be moved around on screen via DPAD key events.
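A sketch of entering expanded PiP on a TV device that declares the feature — the aspect ratios here are arbitrary examples, with the expanded ratio deliberately beyond the normal 2.39:1 limit:

```kotlin
import android.app.Activity
import android.app.PictureInPictureParams
import android.util.Rational

// Sketch: opt into expanded PiP on an Android 13 TV device that declares
// android.software.expanded_picture_in_picture.
fun enterExpandedPip(activity: Activity) {
    val params = PictureInPictureParams.Builder()
        .setAspectRatio(Rational(16, 9))          // ratio used for normal PiP
        .setExpandedAspectRatio(Rational(16, 3))  // beyond 2.39:1 → expanded PiP
        .build()
    activity.enterPictureInPictureMode(params)
}
```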

The setPreferDockBigOverlays API determines how the expanded PiP window is displayed on screen. This API “specifies a preference to dock big overlays like the expanded picture-in-picture on TV.” Docking “puts the big overlay side-by-side” the activity that specifies this preference so “both windows are fully visible to the user.” In docked mode, the PiP window is docked “on one of the screen edges”, while the fullscreen app “is resized to occupy all the space next to it”.

Docked mode is the default behavior when an app enters expanded PiP mode, though how the two apps are displayed side-by-side depends on whether the activity of the fullscreen app is resizable. If it isn’t, then it’s scaled down using size compatibility mode, and the system will apply borders around the window to maintain the activity’s aspect ratio.

If docked mode is disabled through the setPreferDockBigOverlays API, then the expanded PiP window will be overlaid on top of the fullscreen app, which is the normal PiP behavior.
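A sketch of toggling the preference, using the API name as it appears in the Android 13 beta:

```kotlin
import android.app.Activity
import android.os.Build

// Sketch: prefer docking expanded PiP windows beside this activity
// instead of overlaying them on top of it.
fun preferDockedExpandedPip(activity: Activity) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
        activity.setPreferDockBigOverlays(true) // false = overlay instead of dock
    }
}
```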

HDMI state changes are surfaced to the MediaSession lifecycle

Changes in the state of a device connected via HDMI are now surfaced to the MediaSession lifecycle. Google says that if developers handle these events correctly, playback should stop when an HDMI device is turned off.

Keep clear APIs

Since PiP windows may overlay important UI elements, Android 13 adds the ability to mark UI elements that shouldn’t be overlaid. This new capability, called keep clear, doesn’t guarantee that those UI elements won’t be overlaid, but the system will attempt to abide by the app’s request nonetheless. Google warns that the system may not honor the app’s keep clear requests if too many UI components are marked as keep clear. If an app has large UI components that shouldn’t be overlaid, Google recommends supporting docked mode for apps running in expanded PiP mode.

Developers can mark views as keep clear using the android:preferKeepClear attribute in XML layouts. The setPreferKeepClear API can also be used to programmatically mark a view as keep clear. If the entire view doesn’t need to be marked as keep clear, the setPreferKeepClearRects API can be used to specify regions of the view that shouldn’t be overlaid.
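A sketch of the XML form — the button ID is a placeholder, and the attribute is a hint to the system rather than a guarantee:

```xml
<!-- layout XML: ask the system not to overlay this control,
     e.g. with a PiP window -->
<Button
    android:id="@+id/play_pause_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:preferKeepClear="true" />
```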

Keyboard layouts API

The new getKeyCodeForKeyLocation API can be used to determine the layout of a connected keyboard. It returns the key code produced by a given location on a reference QWERTY keyboard. For example, if the input is KeyEvent#KEYCODE_B and the value returned is KeyEvent#KEYCODE_B, then the current keyboard layout is likely QWERTY. If, however, the input is KeyEvent#KEYCODE_Q and the API returns KeyEvent#KEYCODE_A, then the keyboard layout is likely French AZERTY, because the location of the “Q” key on a QWERTY keyboard corresponds to the location of the “A” key on a French AZERTY keyboard.
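A sketch of that heuristic using the InputDevice API on Android 13 — the function name is our own:

```kotlin
import android.view.InputDevice
import android.view.KeyEvent

// Sketch: does the key at the QWERTY "Q" position actually produce "A"?
// If so, the connected keyboard is likely using a French AZERTY layout.
fun looksLikeAzerty(device: InputDevice): Boolean {
    return device.getKeyCodeForKeyLocation(KeyEvent.KEYCODE_Q) == KeyEvent.KEYCODE_A
}
```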

Low power standby mode

Android 13 adds a new “low power standby” mode that places restrictions on apps while the device is in standby. While low power standby is active, wakelocks are disabled and network access is blocked. These restrictions are lifted temporarily during doze maintenance windows.

This feature is intended for Android TV devices and is disabled by default on other configurations. Low power standby cannot be enabled unless the framework value ‘config_lowPowerStandbySupported’ is set to true. If supported, it can then be enabled by default by setting the framework config ‘config_lowPowerStandbyEnabledByDefault’ to true or toggled via Settings.Global.low_power_standby_enabled.

This change is likely designed to better meet the EU’s energy saving requirements.

Picture-in-picture mode support comes to Google TV

According to Google, Android 13 brings support for picture-in-picture mode to Google TV. “While PiP support was introduced in Android 8.0 (API level 26), it was not widely supported on Android TV, and not supported at all on Google TV prior to Android 13,” reads the documentation.


Conclusion

At Esper, we support Android on a variety of form factors, from handhelds to large screen devices like tablets and POS terminals. Although the release of Android 13 is several months away, we’ll be diligently monitoring new releases to see what new features, behavior changes, and APIs users, developers, and, more importantly, enterprises need to be aware of. Because Android is a rapidly evolving operating system, it’s easy to fall behind the latest developments. Let Esper manage the software that runs on your device fleet; we care about the nitty-gritty implementation details so you don’t have to.


Developer Preview & Beta changelogs

As mentioned earlier, Google plans to release 2 developer preview and 4 beta builds of Android 13 prior to the initial stable release in Q3 2022. This article documents all of the changes introduced in Android 13 and does not distinguish between the versions in which they were introduced. However, for historical purposes, this section will list all of the changes introduced in each developer preview and beta build. Content creators and journalists are welcome to use this section as a historical reference.

What’s new in Android 13 Developer Preview 1?

Android 13 Developer Preview 1 was released on February 10, 2022. According to Google’s announcement, the first Developer Preview came with the following features, as well as some changes mentioned in the developer docs but not in the blog post:

Following the release of Developer Preview 1, we discovered the following hidden or undocumented changes:

What’s new in Android 13 Developer Preview 2?

Android 13 Developer Preview 2 was released on March 17, 2022. According to Google’s announcement, the second Developer Preview introduced the following features:

Of course, the second Developer Preview was also full of many hidden or undocumented changes, as well as some changes mentioned in the developer docs but not in the blog post. These include:

What’s new in Android 13 Beta 1?

Android 13 Beta 1 was released on April 26, 2022. According to Google’s announcement, the first beta introduced the following features:

Of course, the first Beta is also full of many hidden or undocumented changes, as well as some changes mentioned in the developer docs but not in the blog post. These include:

What’s new in Android 13 Beta 2?

Android 13 Beta 2 was released on May 11, 2022. According to Google’s announcement, the second beta introduced the following features (that we previously have not covered):

As always, the second Beta release is chock-full of hidden or undocumented changes, including many that are mentioned in the developer docs but not in the official blog post. These are:

At Google I/O, Google also released the second Android 13 beta for Android TV. The company also finally documented some of the new features coming in the Android 13 update for TVs. These include the following:

The low power standby feature we previously discovered was not brought up in Google’s documentation.


Article changelog

This article is updated very frequently to add new information or correct existing content. As such, we maintain a changelog for readers to quickly see what information has been added since their previous visit. This changelog is not comprehensive, but instead summarizes the changes that are made.