Embedding UaaL in a SwiftUI App: Managing the View Lifecycle

Hello! We’re the iOS team at AnotherBall.

In a previous post, our mobile team introduced the multi-repository architecture behind Avvy — covering how Kotlin Multiplatform (KMP) and Unity as a Library (UaaL) are built, distributed, and integrated across five repositories.

What makes Avvy technically interesting is how naturally UaaL and native code work together. In this post, we’ll zoom into the iOS side and explain how we manage UaaL views within a SwiftUI app.

Background: The Roles of Unity and Native

A core design principle in Avvy is that Unity is responsible only for 2D avatar functionality. Unity handles avatar rendering and the avatar customization (dress-up) UI. Everything else — all other features and UI — is implemented natively, so we can take full advantage of native capabilities and deliver an experience that feels like a proper streaming app.

UaaL’s Constraint: Only One Instance at a Time

There’s a major constraint when working with UaaL: loading more than one instance of the Unity runtime isn’t supported, so only one UaaL view can be displayed on screen at a time. If you try to display two simultaneously, one of them won’t render.

In Avvy, we show avatars across multiple screens — the streaming view, avatar home, gacha, and more. This means we need to swap the UaaL view between screens on every navigation. But having each screen manage this lifecycle individually is cumbersome and increases the risk of unexpected bugs.

To solve this, we created a dedicated SwiftUI component called UnityView that centralizes this management, allowing each screen to use it just like any other view.

Anatomy of the Streaming Screen

As an example, let’s look at the streaming screen. Unity sits at the bottom layer handling only avatar rendering, with native UI overlaid on top.

The gray area is Unity’s avatar rendering region, and the yellow areas are native overlays. In Avvy’s iOS app, we call the Unity region UnityView. From SwiftUI’s perspective, it works just like any other view:

UnityView(displayType: .liveStream) // Specify which Unity scene to load
    .frame(maxWidth: .infinity, maxHeight: .infinity)
    .ignoresSafeArea() // Render edge-to-edge including safe areas
    .overlay {
        if viewModel.isSceneLoading {
            LoadingOverlay() // Loading indicator
        } else {
            overlayContent // Native buttons, comment list, etc.
        }
    }

Developers working on each feature screen don’t need to think about Unity’s lifecycle at all — just open and close the screen, and the avatar display toggles automatically.

Implementing UnityView

UnityView is a UIViewControllerRepresentable that wraps a UnityViewController internally. We need a UIKit view controller because the rendering view provided by UnityFramework is a UIKit UIView.

For example, when presenting the streaming screen as a modal from the avatar home screen, the Unity view needs to automatically move to the frontmost screen. UnityViewController achieves this using the viewWillAppear/viewWillDisappear lifecycle:

public struct UnityView: UIViewControllerRepresentable {
    public let displayType: DisplayType

    public func makeUIViewController(context: Context) -> UnityViewController {
        return UnityViewController(displayType: displayType)
    }

    public func updateUIViewController(_ uiViewController: UnityViewController, context: Context) {
        // Nothing to update; the view controller manages the Unity view itself.
    }
}

public final class UnityViewController: UIViewController {
    // The Unity view provided by UaaL. Simplified for this article.
    // It's a singleton, so the same instance is reused across all screens.
    private var unityView: UIView = UnityFramework.shared.rootView

    public override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        addUnityView() // Add the view
    }

    public override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        removeUnityView() // Remove the view
    }

    private func addUnityView() {
        view.insertSubview(unityView, at: 0) // Add Unity's view at the bottom layer
        unityView.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            unityView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            unityView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            unityView.topAnchor.constraint(equalTo: view.topAnchor),
            unityView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
        ])
    }

    private func removeUnityView() {
        unityView.removeFromSuperview()
    }
}

When the screen appears (viewWillAppear), we add the Unity view; when it disappears (viewWillDisappear), we remove it. It’s an extremely simple implementation, but this alone is enough to guarantee that only one Unity UIView exists on screen at any given time.

Additionally, alongside the view swapping, the Avvy app also pauses and resumes Unity to reduce battery consumption and resource usage.

Wrap-up

When embedding UaaL in a native app, there’s a constraint that only one view can be displayed at a time. By using UIViewController’s lifecycle to automate view swapping and resource management, and wrapping it as a SwiftUI UnityView, we made it possible to display avatars without worrying about any of that. This architecture lets Avvy maintain a native app experience while being an avatar-centric app.

In the next post, we’d like to cover how we use the DisplayType introduced in this article to load specific scenes in Unity, and more broadly, how native and Unity communicate with each other.

We’re Hiring

At AnotherBall, we care about building testable, maintainable architecture — and we’re always looking for engineers who share that mindset. If this kind of work excites you, we’d love to talk!

AnotherBall Careers

Using AI Coding to Detect Team Stagnation: Building a Custom Chrome Extension Kanban

Hi, I’m @fortkle, a Tech Lead and Head of Engineering for the Avvy team at AnotherBall.

In this post, I’ll share how I built a custom Chrome extension that overlays a kanban view on top of Linear — making it easier to spot stagnation and priority misalignment in our Scrum workflow.

The Physical Whiteboard Kanban That Just Worked

Our Avvy development team uses Scrum, with a kanban board at the center of our daily transparency and inspection process.

When I think about kanban, what still comes to mind is the physical whiteboard and sticky notes we used at a previous job.

The setup was simple:

  • Draw lanes on the whiteboard with a marker (Todo / In Progress / In Review / Done)
  • Arrange cards from top to bottom in priority order
  • Place Issues (parent) on the left, with Sub-issues (children) expanding to the right

That alone made it clear what we were working on and in what order across the sprint.

We started with these simple rules, but our kanban evolved as we progressed through sprints.

For example, we printed members’ Slack icons and attached them to magnets to place on cards as assignment indicators. We also wrote WIP limits directly on the whiteboard with markers.

The Kanban Guide states that “how flow transparency is achieved is limited only by the imagination of the Kanban system members,” and the physical whiteboard was exactly that kind of tool.

If you’re interested in creative uses of physical kanban, The Agile Coach’s Toolbox – Visualization Examples is a great read.

Moving to Linear — Recreating “That Feeling”

Our current Avvy team includes remote members working from different regions and countries, so a physical whiteboard isn’t practical. We use Linear for our kanban instead.

Linear has a Board View for card management, but after using it for a while, I noticed several gaps from the whiteboard experience.

Example of grouping cards by assignee (all data shown is fictional)

  • The Issue/Sub-issue parent-child structure can’t be laid out ideally on the board (possible with grouping/sub-grouping, but not ideal)
  • The default UI limits how much you can see in one screen, making it hard to grasp the full sprint
  • Stagnating cards aren’t easy to spot at a glance
  • WIP limits can’t be enforced visually

We kept feeling like, “It works, but it doesn’t have that feeling.”

Overlaying a Custom View with a Chrome Extension

The solution I tried was to overlay a custom view on top of Linear using a Chrome extension.

Since the Linear API provides Issue and Sub-issue data along with status change history, I used that to recreate the whiteboard-era layout in the browser.

Custom view recreating the physical kanban (all data shown is fictional)

Key features include:

Expanded Issue/Sub-issue parent-child structure
Sub-issues expand from their parent Issue card on the board, making the full picture easy to grasp. Being able to see the entire sprint in one screen is a major plus.

Vertical ordering by priority
Tasks within the sprint are displayed from top to bottom in priority order. As the sprint progresses, cards move toward the upper right in a diagonal pattern — making it easy to notice when lower-priority cards are moving ahead of higher-priority ones.

Handling unexpected tasks
With priority made visible, when an unexpected task comes in, it’s easier to decide: “Let’s drop this lower-priority item and fit in the new task.”

AI Coding Makes “I Just Want to Tweak This” Instant

Building a custom view like this was technically possible before, but AI coding takes it to another level. The key point is that with AI coding, small changes can be made in minutes. Here are a few examples of improvements I made.

Stagnation Indicator

There was a problem: “Even if a member is stuck in In Progress, it’s hard to notice just by looking at the board.” With the physical kanban, we’d draw tally marks on sticky notes to visualize elapsed days.

I told the AI: “I want to visualize elapsed time in each lane. Add an indicator like a 5-segment gauge on the assignee’s avatar — fill one segment per day, up to a maximum of 5 days of stagnation.” The UI came together in just a few minutes.

A 5-segment ring indicator appears on the assignee’s avatar in the upper right of each card, filling one segment per day since the card moved to In Progress. Now it’s easy to see at a glance when a card has been stuck for several days.

Since the Linear API provides timestamps for status changes, this was all achievable on the client side — no server-side additions needed.
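
The gauge logic itself is just date arithmetic. Here is a minimal sketch in Python for illustration (the `status_changed_at` field name is an assumption standing in for whatever timestamp Linear's status-change history returns):

```python
from datetime import datetime, timezone

def gauge_segments(status_changed_at: str, now: datetime, max_segments: int = 5) -> int:
    """Filled segments on the ring: one per full day since the card
    entered In Progress, capped at max_segments."""
    changed = datetime.fromisoformat(status_changed_at.replace("Z", "+00:00"))
    return max(0, min(max_segments, (now - changed).days))

now = datetime(2025, 3, 10, tzinfo=timezone.utc)
print(gauge_segments("2025-03-03T09:00:00Z", now))  # 5 (capped)
print(gauge_segments("2025-03-08T09:00:00Z", now))  # 1
```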

Filtering and Card Focus

During our Daily Scrums, each member shares what they’re working on that day. But it wasn’t always clear at a glance where each person’s cards were on the board. With the physical kanban, you could just point at the card directly.

I told the AI — along with a screenshot — “When filtering by assignee, I sometimes lose track of where their cards are. Only when filtering is active, add ←→ buttons in the attached screenshot’s position to focus on that person’s cards one by one — like a browser’s in-page search (Ctrl+F).” The result looked like this:

With minor UI tweaks, the implementation matched my vision almost immediately. Also, instead of hiding non-selected cards (the typical filter behavior), I chose to highlight the selected ones. That way, you can still see the context of other tasks even while filtering — which turned out to be very useful.

How the Team Changed After We Started Using It

Here are some changes we saw after adopting this board:

  • “This task seems higher priority than what we’re currently working on!” conversations increased
    • When tasks are ordered by priority, misalignment becomes easy to spot.
  • We started catching stagnation earlier
    • Cards stuck in In Review for several days became more visible, leading to earlier “I can help if you’re stuck” offers.
  • Making trade-offs within a sprint became easier
    • When unexpected tasks came in, reaching consensus on “let’s drop this one this sprint” got smoother.

On the downside, since this is a Chrome extension, it’s not accessible from smartphones or dedicated desktop apps. Also, even for internal-only distribution, the Chrome Web Store review process takes 2–3 days, which slowed down our rollout to the team.

We’ve only just started using it, but this made me realize the potential of building ideal kanban tools with AI.

Distributing the Chrome Extension Within the Team

You can publish a Chrome extension to the Chrome Web Store while restricting access to company members only. This article was helpful for the specific steps:

Distribute to Google Group members - Google

We also automated the submission process using GitHub Actions, so merging a PR automatically triggers a review submission.

Wrap-up

I had half given up on the idea that “the flexibility of physical kanban can’t be replicated in digital tools” — but AI coding has changed that for us.

Describe what you want to visualize in plain language, and a working UI appears in minutes. Free from the constraints of existing tools, teams can now build the visualizations they actually need.

We’ll keep evolving our kanban as our team grows and its needs change.

We’re Hiring

At AnotherBall, we believe in giving teams the freedom to identify and solve their own problems — just like the kanban we built here. If that kind of culture sounds appealing, we’d love to talk.

AnotherBall Careers

From Zendesk to Chatwoot ── How We Rebuilt Our CS Flow Around AI

Hello! I’m Francis, in charge of AIOps at AnotherBall, where I work on applying AI to internal operations. This post is about how we rebuilt our CS flow around AI — bringing first reply time from 1,186 minutes down to under a minute and ending up with more insight into our users than ever.

Moving to Chatwoot

We were originally using Zendesk for customer support. Users would submit a form from the “Contact Us” section in the app, and CS would handle it through Zendesk’s ticket interface. We wanted to use AI to improve CS efficiency, but the more we invested in AI-assisted responses, the more we ran up against the platform’s limitations. Zendesk’s architecture wasn’t really built for a high level of AI customization — customizing the response flow required paid add-ons and workarounds. We wanted granular control over tones, template usage, and escalation rules per inquiry category, but Zendesk made that difficult.

Chatwoot was able to give us this control — it let us plug our own automation layer into the workflow while providing fine control over the AI responses. We’re also keeping operating costs low with the hosted plan, and we have the option to move to the open-source/self-hosted edition if we ever need to. Having an open-source foundation also makes the platform’s features overall much more transparent and easy to troubleshoot.

The Problem: Chatwoot is Chat-First, We’re Form-First

Chatwoot is a chat-based CS tool, but when we were using Zendesk, we handled inquiries through forms. Most issues don’t require real-time support, and we actually want users to include as much detail as possible in their first message. Chat makes it too easy to fire off incomplete messages, leading to unnecessary back-and-forth, so we didn’t want to switch our inquiry flow to chat.

To respond to inquiries, we also needed information such as the user’s device model, OS version, and app version. Our existing form lets us pre-populate device information automatically when users submit from within the app, so they don’t have to type it themselves. Doing that reliably with a chat widget wasn’t straightforward, so I needed a way to keep the form-based intake while still converting each inquiry into a native Chatwoot conversation.

The Solution: Google Forms + Sheets + Apps Script

I went with a stack I could fully control:

  1. Google Forms — the user-facing inquiry form
  2. Google Spreadsheet — form responses land here automatically
  3. Google Apps Script — handles connection to Chatwoot and Slack

When a user submits the form, an Apps Script trigger formats the submission (including all the fields) and sends it as an email to our Google support inbox. Chatwoot is connected to that mailbox, so each email is automatically ingested as a new conversation with the full context preserved. Once the conversation exists, I use Chatwoot’s API to apply the inquiry-category label and set a few contact attributes.
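
The post-creation step boils down to two small calls against Chatwoot's Application API. A sketch that only builds the requests, in Python for illustration; the IDs, label names, and attribute keys are placeholders:

```python
BASE = "https://app.chatwoot.com/api/v1/accounts/{account_id}"  # hosted-plan base URL

def label_request(account_id: int, conversation_id: int, labels: list[str]) -> dict:
    """Build the 'add labels to a conversation' request."""
    return {
        "method": "POST",
        "url": f"{BASE.format(account_id=account_id)}/conversations/{conversation_id}/labels",
        "headers": {"api_access_token": "<token>", "Content-Type": "application/json"},
        "json": {"labels": labels},
    }

def contact_attributes_request(account_id: int, contact_id: int, attrs: dict) -> dict:
    """Build the 'set custom attributes on a contact' request."""
    return {
        "method": "PUT",
        "url": f"{BASE.format(account_id=account_id)}/contacts/{contact_id}",
        "headers": {"api_access_token": "<token>", "Content-Type": "application/json"},
        "json": {"custom_attributes": attrs},
    }

req = label_request(1, 42, ["billing"])
print(req["url"])  # .../accounts/1/conversations/42/labels
```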

I also have a script that functions as a webhook handler. Chatwoot fires a message_created event for every new message, which triggers the script to send the message to Slack: customer messages open a new thread, agent and AI replies post as threaded responses.
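
The routing decision in that handler is simple. A Python sketch of the idea, with the webhook payload fields simplified (Chatwoot's `message_created` event carries the message type and the conversation it belongs to):

```python
# In-memory map from Chatwoot conversation ID to the Slack thread timestamp
# that opened it. A real handler would persist this somewhere durable.
thread_index: dict[int, str] = {}

def route_message(event: dict) -> dict:
    """Decide how a message_created event maps onto Slack."""
    conv_id = event["conversation"]["id"]
    if event["message_type"] == "incoming":  # customer message: open a new thread
        return {"action": "post_new_thread", "conversation": conv_id}
    # agent or AI reply: post into the conversation's existing thread
    return {"action": "post_reply", "thread_ts": thread_index.get(conv_id)}

thread_index[42] = "1714000000.000100"
print(route_message({"conversation": {"id": 42}, "message_type": "outgoing"}))
```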

CS inquiry flow: Google Form → Chatwoot → Slack

Every inquiry lands in the Google Sheet, which is the most valuable part of the architecture. With all the raw data there, I built an automated weekly CS digest: every Friday, an LLM classifies the week’s inquiries (including feature requests), and a script then posts a bilingual JP/EN summary to Slack.

As a result, the team reacted quickly, and it’s sparked ideas beyond CS — applying a similar approach to our social media and getting a better sense of how users experience the app. This feels like the beginning of something bigger: not just responding to our community, but actually knowing them.

Prompt Engineering for Chatwoot

Here are a few prompt-engineering choices that made our AI replies more effective:

  • Explicit template structure. I include the exact headers, tone guidelines, and sign-off patterns our human agents use. The model follows them reliably when the structure is explicit.

    Role: You write professional emails on behalf of [App] Support.
    Use a greeting, structured paragraphs, and a courteous closing.
    Insert TWO line breaks between paragraphs.

    Greeting: Address the user by username (e.g. "user_12345").
    If no identifier is available, use a neutral greeting.
  • Inquiry type routing. Different inquiry types get different system prompts. We have the bot identify the inquiry type using the message content and select the appropriate prompt — no manual tagging required.

  • Escalation logic. The model escalates to a human when it’s not confident.
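
The inquiry-type routing above can be sketched as prompt selection keyed on a classification step. In production the model itself classifies the message; the keyword matching below is only a stand-in for illustration, and the category names and prompt texts are placeholders:

```python
PROMPTS = {
    "billing": "System prompt for payment and subscription inquiries...",
    "bug": "System prompt for technical problem reports...",
    "general": "Fallback system prompt for general questions...",
}

# Illustrative stand-in for an LLM classification call.
KEYWORDS = {
    "billing": ("charge", "refund", "subscription"),
    "bug": ("crash", "error", "freeze"),
}

def select_prompt(message: str) -> str:
    """Pick the system prompt for an inquiry based on its content."""
    text = message.lower()
    for category, words in KEYWORDS.items():
        if any(w in text for w in words):
            return PROMPTS[category]
    return PROMPTS["general"]

print(select_prompt("I was charged twice this month") == PROMPTS["billing"])  # True
```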

Results

We switched to Chatwoot on Feb 16:

Metric           | Zendesk avg (Jan 15–Feb 15) | Chatwoot avg (Feb 16–Feb 26)
First reply time | 1,186 min (19.8 hrs)        | ~1 min
Resolution time  | 289.7 hrs (12.1 days)       | 30 hrs (1.25 days)

Zendesk’s averages are skewed by outlier tickets; the median was 299 min / 172.8 hrs, still well above Chatwoot’s averages. Ten days is a short window, though, so I’ll revisit the numbers once we have a full month of data.

The ~1 min first reply is the AI auto-responding the moment a conversation is created, regardless of time zone. Human agent follow-ups average 6 hr 46 min.

With everything in Google Sheets, I can easily analyze our data. I can track inquiry volume, spot emerging issues, and run a weekly bilingual CS digest automatically — no dashboard, no analytics add-ons needed.

The setup was a real investment, but I’m in a better position now: a stack I control, AI integration that works, and data I can act on. The broader lesson: own your data and your integration layer. Whether this tradeoff makes sense depends on your team, but for me, wanting to invest heavily in AI and iterate quickly, a more open, controllable stack was the right call.

We’re Hiring

AnotherBall sits at the intersection of entertainment and technology. We build products that connect people to the content and communities they love — and we use AI across the full stack to do it better and faster.

If that sounds like the kind of environment you want to be part of, we’d love to hear from you.

AnotherBall Careers

Mobile App Development with KMP × Unity UaaL ── Multi-Repo Setup and Automation

Hi! We’re the AnotherBall Mobile Engineering Team.

Our app “Avvy” has a somewhat complex architecture: we use KMP (Kotlin Multiplatform) to share business logic across iOS and Android, and Unity as a Library (UaaL) to embed Unity’s 2D avatar rendering into our native apps.

In this article, we’ll share how we coordinate five repositories and how much we’ve automated with GitHub Actions.

Repository Structure

To avoid build complexity and inter-team dependencies, we split our codebase by function into separate repositories.

Repository   | Role                                   | Artifacts
shared-kmm   | Business logic                         | AAR / XCFramework (KMP library)
unity-module | 2D avatar rendering                    | UaaL libraries for Android / iOS
android-app  | Android app                            | APK / AAB
ios-app      | iOS app                                | IPA
unity-spm    | SPM distribution for Unity XCFramework | Swift Package

Each repository communicates through pre-built libraries—AAR for Android and XCFramework for iOS. This allows each team to work independently.

What the Unity Module Does

The Unity module handles avatar display and real-time control.

  • Avatar display: Renders avatars with 2D animation
  • Face tracking: Detects facial movements via camera and reflects them on the avatar
  • Customization: Outfit and accessory changes

Communication with native apps requires special handling. Face tracking sends data 60 times per second, so standard bridges would cause latency. On iOS, we use direct pointer access; on Android, we use memory-mapped files for fast data exchange.
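
As an illustration of the memory-mapped approach (not the production code), here is a minimal Python sketch: the writer overwrites a fixed-size frame slot in place and the reader decodes it directly, so no serialization layer or bridge call sits on the 60 Hz path. The 52-value frame layout is an assumption for the example:

```python
import mmap
import struct
import tempfile

N_VALUES = 52                   # per-frame float count (illustrative)
FRAME_BYTES = 4 + N_VALUES * 4  # uint32 frame counter + float32 values

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(FRAME_BYTES)     # pre-size the shared file to one frame slot
    path = f.name

def write_frame(mm: mmap.mmap, frame_id: int, values: list) -> None:
    # Overwrite the single frame slot in place: no allocation, no copies.
    mm.seek(0)
    mm.write(struct.pack(f"<I{N_VALUES}f", frame_id, *values))

def read_frame(mm: mmap.mmap):
    mm.seek(0)
    data = struct.unpack(f"<I{N_VALUES}f", mm.read(FRAME_BYTES))
    return data[0], list(data[1:])

with open(path, "r+b") as f, mmap.mmap(f.fileno(), FRAME_BYTES) as mm:
    write_frame(mm, 1, [0.5] * N_VALUES)
    frame_id, values = read_frame(mm)

print(frame_id, values[0])  # 1 0.5
```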

How We Automated It

We use GitHub Actions to automate most of the cross-repository coordination. About 1,200 PRs are processed automatically each month (December 2025 figures).

Server API Changes

When the server’s API definition file (OpenAPI) is updated, an update PR is automatically created in the KMP repository.

KMP Library Updates

From KMP library release to update PR creation in each app repository—everything is automated.

The flow is simple: Publish → Trigger → Update.

  1. When a release is triggered in shared-kmm, it publishes to GitHub Packages
  2. On success, gh workflow run triggers the update workflow in each app repository
  3. Each app gets an auto-generated PR with the version update

# On KMP release (excerpt)
- name: Publish to GitHub Packages
  run: ./gradlew publish

- name: Trigger Android update
  run: |
    gh workflow run update-kmm-version.yml \
      --repo AnotherBall/android-app \
      --field version=${{ env.VERSION }}

# Android app update workflow (excerpt)
- name: Update version in libs.versions.toml
  run: |
    sed -i "s/kmm = \".*\"/kmm = \"${{ inputs.version }}\"/" \
      gradle/libs.versions.toml

- name: Create Pull Request
  uses: peter-evans/create-pull-request@v5
  with:
    title: "Update KMP to ${{ inputs.version }}"
    branch: "auto/kmm-${{ inputs.version }}"

Unity Library Distribution

To use Unity modules in the iOS app, we distribute them via SPM (Swift Package Manager).

  1. Build the Unity XCFramework and upload it to GitHub Releases
  2. Calculate a hash value for the file
  3. Auto-generate Package.swift (embedding the download URL and hash)
  4. Create a PR in the distribution repository (unity-spm)

While we could distribute directly from the unity-module repository, Xcode downloads the entire repository when resolving SPM packages. By creating a separate unity-spm repository that contains only Package.swift, we significantly speed up the download process.

The hash value lets the iOS app verify the file wasn’t corrupted during download.
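
Steps 2 and 3 can be sketched as follows. SPM's binary-target checksum is a SHA-256 digest of the zip (the same value `swift package compute-checksum` prints); the release URL pattern and package name below are illustrative, not the real locations:

```python
import hashlib
import tempfile
import textwrap

def sha256_checksum(path: str) -> str:
    """SHA-256 of the XCFramework zip, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def render_package_swift(version: str, checksum: str) -> str:
    """Generate a Package.swift whose only target is the prebuilt binary."""
    url = (
        "https://github.com/AnotherBall/unity-module/releases/download/"
        f"{version}/UnityFramework.xcframework.zip"  # illustrative URL pattern
    )
    return textwrap.dedent(f"""\
        // swift-tools-version:5.9
        import PackageDescription

        let package = Package(
            name: "UnityFramework",
            products: [.library(name: "UnityFramework", targets: ["UnityFramework"])],
            targets: [
                .binaryTarget(
                    name: "UnityFramework",
                    url: "{url}",
                    checksum: "{checksum}"
                )
            ]
        )
        """)

with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b"hello")
digest = sha256_checksum(tf.name)
manifest = render_package_swift("1.2.3", digest)
print(manifest.splitlines()[0])  # // swift-tools-version:5.9
```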

Auto-Generated Release Branch PRs

When changes are pushed to a release/* branch, multiple merge PRs are automatically created:

  • release/2.10.0 → main (for production release)
  • release/2.10.0 → release/2.11.0 (to propagate bug fixes to the next version)

Version numbers are compared to determine the appropriate merge targets, preventing missed merges.
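
A sketch of that comparison in Python, assuming branch names of the form release/x.y.z (version tuples compare correctly as integers, so 2.10.0 sorts after 2.9.0):

```python
import re

def merge_targets(release_branch: str, all_branches: list) -> list:
    """Given a release/x.y.z branch, merge into main, plus the next-higher
    release/* branch if one exists (to propagate fixes forward)."""
    def ver(branch):
        m = re.fullmatch(r"release/(\d+)\.(\d+)\.(\d+)", branch)
        return tuple(map(int, m.groups())) if m else None

    current = ver(release_branch)
    higher = sorted(v for b in all_branches if (v := ver(b)) and v > current)
    targets = ["main"]
    if higher:
        targets.append("release/{}.{}.{}".format(*higher[0]))
    return targets

print(merge_targets("release/2.10.0", ["release/2.10.0", "release/2.11.0", "main"]))
# ['main', 'release/2.11.0']
```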

Package.resolved Conflict Resolution (iOS)

When multiple KMM/UaaL update PRs exist simultaneously, Package.resolved file conflicts occur. In the iOS repository, we have a workflow that automatically resolves these conflicts.

Trigger: Push to release/* branch

How it works:

  1. Fetch open PRs targeting the release branch via GitHub API
  2. Filter PRs with titles starting with chore: update KMM or chore: update UaaL
  3. Check if each PR can be merged, and identify those with conflicts
  4. Resolve conflicts for each PR

# Attempt to merge base branch
if git merge "origin/$BASE_REF" --no-edit; then
  echo "Merge succeeded"
else
  # On conflict: temporarily adopt PR's Package.resolved
  git checkout --ours Package.resolved
  git add Package.resolved
  git merge --continue
fi

# Re-resolve SPM dependencies (incorporates base branch changes)
make resolve-package-dependencies

git commit -m "chore: resolve Package.resolved conflict"
git push

The key is make resolve-package-dependencies (which runs xcodebuild -resolvePackageDependencies internally), re-resolving dependencies including target branch changes. When multiple PRs have conflicts, they’re processed in parallel.

Remaining Challenges

  • CI/CD execution time: Gradle/Xcodebuild can take 40+ minutes; we’re looking into better caching strategies
  • Workflow duplication: Similar logic exists in multiple workflow files; we want to extract it into reusable components
  • Auto-merging update PRs: Currently we only auto-create PRs; we’d like to auto-merge when tests pass

Conclusion

Even with a complex setup combining KMP and Unity UaaL, separating repositories and communicating through artifacts lets each team work independently. We’ve learned that automation isn’t a one-time setup—it requires ongoing improvement.

We’re Hiring

AnotherBall is looking for mobile engineers interested in app development using Kotlin/Swift/Unity/AI!
We’re seeking teammates to grow our product together while adopting new technologies like KMP. If you’re interested, we’d love to hear from you!

Kotlin Fest 2025 Event Report ── AI, KMP, and Developer Experience Frontlines

Hi there!
We’re RIO (@rioX432) and Apippo (@A5th_Faris) from the AnotherBall Mobile Team.

We work on Android/iOS development for Avvy using KMM and Compose Multiplatform. Since we actively use AI for native code development every day, Kotlin Fest 2025 was packed with topics directly relevant to our daily work.

We attended Kotlin Fest 2025 held on November 1, 2025!

This year featured presentations across 3 rooms, recording the highest number of sponsors and talks ever. AnotherBall participated as a Silver Sponsor!

The entire venue was buzzing with excitement—truly a “festival” atmosphere.

What is Kotlin Fest?

Kotlin Fest is Japan’s largest Kotlin community conference, themed around “Celebrating Kotlin,” aimed at sharing knowledge and fostering connections around Kotlin and its ecosystem.

This year’s venue was the Tokyo Conference Center Shinagawa.

The corporate booths and networking spaces after sessions were packed, with conversations buzzing about implementation details and architecture. What stood out most was the clear increase in companies adopting server-side Kotlin.

Beyond Android and KMP, there was a real sense that Kotlin is expanding as a “general-purpose language.”

Session Highlights

Opening Session

“The Role of Kotlin Developers in an Era Where AI Writes Code”
Taro Nagasawa, Atsushi Mori, Eiji Tamaki / Kotlin Fest Organizers

Even in an age where AI writes code, developers remain the “reviewers and decision-makers.”
The session explained why Kotlin is the most suitable language for this era:

  • Simple yet expressive syntax
  • AI-assisted ecosystem through JetBrains IDEs
  • Consistent training data

Not “AI taking our jobs,” but “evolving design together with AI.”
This direction was shared across the entire Kotlin community in the opening.

[Invited Session] The Technology Behind Kotlin: Language Design and Unsung Heroes

Yan Zhulanow / Kotlin Team

The session covered new features in Kotlin 2.3 and the development background of the Build Tools API.

The Build Tools API is particularly relevant for AnotherBall, which uses KMM.
Improvements in per-module incremental builds and KMP support promise significant improvements in build speed and developer experience.

Kotlin’s evolution values “stability and consistency” over “novelty.”

Custom String Interpolation via Kotlin Compiler Plugin

be-hase

A case study on understanding Kotlin Compiler Plugin internals and customizing string interpolation. The code generation mechanism using IR (Intermediate Representation) extension was thoroughly explained. A session that offered perspective from the “extending” side rather than just “using” the language.

At Least Make It Native - Multiplatform and Exit Strategy

RyuNen344

A pragmatic session focused not just on adopting multiplatform, but on how to localize the pain of an eventual exit. It covered gradual replacement strategies while considering Swift Export and Kotlin/Native constraints. The philosophy of “don’t adopt everything at once; design for clean rollback, above all by defining clean interfaces” is crucial for hybrid apps like Avvy.

Implementing CPU/GPU “Cooperative” Performance Management in Kotlin

matuyuhi

An explanation of how to safely handle Android Dynamic Performance Framework (ADPF) in Kotlin. Kotlin-like approaches including async data processing with Flow, type-safe design with sealed classes, and DSL-based control. Immediately worth considering for Avvy. Useful not just for thermal management, but also for optimizing face detection inference on low-spec devices.

Inside of Swift Export - A New Bridge Between Kotlin and Swift

giginet

An explanation of how Swift Export enables calling Kotlin code directly from Swift. Without going through the ObjC bridge, the API experience on iOS improves significantly. A meaningful update for both CMP/KMM adoption and exit strategies. While not production-ready yet, we should start preparing for migration from ObjC now.

Rewind & Replay: Kotlin 2.2 Transforms Coroutine Debugging

daasuu

Introduction to debugging improvements in Kotlin 2.2 + IntelliJ IDEA 2025.1. Enhanced local variable retention, stack restoration, and step execution. Definite usability improvements that will gradually boost daily development efficiency.

Building Full Kotlin! MCP Server, AI Agent, and UI All-in-One

Shuzo Takahashi

An introduction to implementing MCP servers, AI agents, and UI entirely in Kotlin using the Kotlin MCP SDK. The presentation included examples of integrating Figma with MCP servers, which was highly inspiring. Worth exploring for Avvy, especially for UI generation assistance and design automation integration.

Getting Started with AI Agent Development Using Koog

hiro

A detailed session on “Koog,” JetBrains’ AI agent construction library. Langfuse integration, DSL construction, parallel execution—enabling serious AI workflow design. This is a topic to tackle immediately.

Shifting to the mindset of “what to make AI do,” we’re thinking about applications from UI code generation assistance to company-wide OKR progress management and PdM decision support.

A Practical Guide to Transforming Legacy Code into Idiomatic Kotlin with AI Collaboration

nishimy432

A presentation on teaching AI “Kotlin-ness” through prompts when converting Java code to Kotlin. The perspective of reflecting team culture in AI—data classes, scope functions, sealed classes, null safety—was impressive. This approach of conveying code style to AI is something AnotherBall can leverage.

Overall Impressions

Kotlin Fest 2025 was structured around three pillars: AI, Multiplatform, and Developer Experience Evolution.

  • Kotlin is becoming the language that drives AI
  • Compose Multiplatform and KMM are becoming “technologies for designing exit strategies”
  • Kotlin development is heading toward “balancing safety and expressiveness”

At AnotherBall, Avvy prioritizes native UI/UX optimization while leveraging KMM for shared business logic, along with Unity for face tracking—a highly challenging architecture. We’re also actively using AI for native code development, and many of the technologies presented at this conference connect directly to our work.

We were reminded of the value of an environment where we can tackle the latest architecture with a full Compose setup.

Final Thoughts

Kotlin Fest 2025 was an event that showcased the depth of the Kotlin community. Both speakers and attendees impressed me with their approach of discussing how to use AI and multiplatform “in the real world.” AnotherBall will continue to push forward in the Kotlin × AI space.

Our team at the event

Have a nice Kotlin!

We’re Hiring

AnotherBall is looking for engineers interested in app development using Kotlin/Unity/AI.
We’re seeking teammates to grow our product together while adopting new technologies like KMM and Compose Multiplatform.

AnotherBall Careers

How AI Put Our Company Into BAKUSOKU Mode

Hi, I’m @ramenshim, CTO of AnotherBall.
As we approach the end of the year, I’d like to look back and share how we accelerated our company into “Bakusoku Mode” with AI from April to June.
“Bakusoku” — a Japanese term meaning “explosive speed,” or moving fast and decisively — captures the spirit of how we pushed our organization forward.
Before these initiatives, only a handful of people used AI. A quick Google Form survey revealed:

  • Some teammates were still on the free tier of ChatGPT.
  • Some had never touched any AI tool besides ChatGPT.

That was a bit of a shock. But it was obvious to us that AI would boost productivity. After we ran a set of company-wide initiatives, people started to say:

  • “I now start work with AI in mind.”
  • “I can tell what AI can and cannot do.”
  • “Let’s try it with AI first.”

This mindset shift was especially impactful for non-engineering roles (marketing, HR, back office, etc.). Here’s exactly what we did.

What We Did to “Bakusoku” the Company

1. Set a slogan — Bakusoku AI 10x

Falling behind on AI is a big risk for a speed-driven startup.
To flip the mood fast, we launched the slogan “Bakusoku AI 10x,” which means “take on challenges 10x faster with AI.” (It’s an homage to Yahoo! JAPAN’s famous “Bakusoku Keiei / Explosive-Speed Management.”)

It might sound like “just a slogan,” but stating it clearly sent a strong message: this isn’t half-hearted; it’s a full-company commitment.

2. Create spaces to talk about AI

Alongside the slogan, we built regular spaces to talk about AI inside our routines: internal lightning talks on good use cases, and hands-on workshops for specific tools.

The most effective was AI Mingle. At morning check-ins, we split into groups of 2–3, and each person took two minutes to share how they had used AI the day before. The thought of "I want something to share at AI Mingle" (and "I don't want to show up empty-handed") helped people build a daily habit of using AI.

This tied tightly to the slogan, too. When someone shared a clever use, teammates could say “That’s Bakusoku!” Having a space where that language gets used made the slogan stick.

We also opened a Slack channel, #random-ai-lab, so even remote members could asynchronously share their day-to-day AI usage.

3. Celebrate AI usage

We wanted to praise AI adoption both top-down and peer-to-peer. So leadership picked one post every day to spotlight and celebrate, cheering on AI usage across the org.

Picked posts. We work bilingually in JA/EN.

At the same time, we introduced AI HEROs: anyone who used AI creatively received a “Bakusoku AI 10x” sticker. Because coworkers could give and receive stickers freely, it combined:

  • The experience of being praised (“That workflow is awesome! Very Bakusoku!”)
  • The fun of collecting stickers, almost like a game

It became a neat motivation loop.

4. Solve real workflows together

Some people naturally took off with AI after these steps, but others — like back office or marketing — still struggled to see how to apply it.

So teammates who were strong with AI paired with them to tackle concrete tasks together.

For example, in the Social Media team, planning, outlining, and post-analysis were very person-dependent. We built a Cursor-based workflow together that ran the whole process from ideation to analysis. This cut planning effort and improved analysis depth. It may look scrappy and unscalable, but once one team succeeds, others copy and teach each other — so it was well worth doing.

How we measured it

It’s hard to quantify “AI adoption,” so we set two quarterly goals:

  1. 100% of members publish something AI-related (an LT or a post to #random-ai-lab).
  2. At least 5 posts per day in #random-ai-lab, continuously.

For me, “company-wide AI adoption” means everyone becomes an originator, not just a consumer — that’s why we set #1. And we needed #2 to make sure organic, spontaneous posts kept happening.

We hit both: 150 posts in 20 business days (7.5 per day).
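The arithmetic behind that check can be sketched as a tiny Kotlin helper (the function names are hypothetical, not part of any tooling we actually ran):

```kotlin
// Average posts per business day in #random-ai-lab
fun postsPerDay(totalPosts: Int, businessDays: Int): Double {
    require(businessDays > 0) { "need at least one business day" }
    return totalPosts.toDouble() / businessDays
}

// Goal #2 from the post: at least 5 posts per day, continuously
fun meetsGoal(totalPosts: Int, businessDays: Int, target: Double = 5.0): Boolean =
    postsPerDay(totalPosts, businessDays) >= target
```

With the numbers above, `postsPerDay(150, 20)` is 7.5, so `meetsGoal(150, 20)` returns true, comfortably clearing the 5-per-day bar.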

Qualitatively, awareness shifted, too. By the end of the quarter:

  • People started work assuming AI is part of the process.
  • They gained a feel for what AI can and can’t do.
  • “Let’s try it with AI first” became a natural phrase.

Each team is now improving workflows, and AI usage is evolving from individual hacks to a company culture.

Resources we used

This effort was driven by two people — myself and our org development lead — and was only possible because leadership committed and passionate members stepped up. Leaders became the “first dancers” for AI, created “second dancers” via programs and 1:1 support, and the rest naturally followed.

Tools and budget

Here are some tools we currently subsidize for teammates:

  • Canva
  • ChatGPT (Codex)
  • Claude (Claude Code)
  • Cursor
  • Gemini
  • Grok
  • Manus
  • Notion AI

Some were adopted bottom-up. When someone proposes a tool, we subsidize a small pilot for those who want to try it, then roll it out wider if it works — lowering the barrier to experiment. Budgets are case by case; we don’t pre-commit a big pool. We also avoid annual contracts because models and tools evolve so fast.

Wrap-up

AI adoption doesn’t become culture through policies or tools alone.

  1. Set a company-wide slogan.
  2. Create spaces to talk about AI.
  3. Celebrate AI usage.
  4. Tackle real workflows one by one.

By stacking these every day, we steadily moved toward an org where using AI is normal.

Join us

In this post, I shared the initiatives we implemented in the first half of the year to drive AI adoption. I look forward to introducing what we’ve been working on since the summer on another occasion.
AnotherBall is hiring people who want to leverage the latest tech and take Japanese entertainment culture global. We’re actively recruiting — check out our openings and help us build products that win worldwide.
Thanks for reading — brought to you by @ramenshim!