AI and UI Convergence: A Deep Dive into Unity UI Integration, Part 1: Splash Screen Setup
In this series of articles, I want to explore the relationship between artificial intelligence (AI) and user interface (UI) integration. My tool of choice has been, and will continue to be, the Unity Engine. The journey goes beyond simple tool usage; it is also a look at what Unity can do when building dynamic and efficient UIs. Given the depth of the subject, I will spread my findings across multiple parts, exploring how AI and Unity together can change UI integration in game development. Even so, I have reservations about AI's usefulness here, and I question whether it will genuinely improve the UI integration process.
Initiating the Interface Journey: Selecting a Complex UI Mockup for Unity
To kickstart my project, I sought out a sophisticated UI mockup that could serve as a foundation for my work. After scouring through various options, I discovered an intricate design on Figma that resonated with my vision. A special acknowledgment to the creator, Paddy Thibau, for this mockup.
To embark on this journey, we'll need a few key components: a UX Prototype, a UX Flowgraph, and a UI Mockup, with the latter housing all the necessary assets. Since no pre-existing UX Flowgraph was available, I drafted a preliminary graph myself. This foundational step is crucial, as it makes the transitions between screens much easier to plan throughout the design process.
Let’s begin by deconstructing the UI Mockup into individual UI Components, starting with the Splash Screen. This methodical approach ensures that each element is meticulously integrated, laying a solid foundation for our user interface.
The components are:
- Background
- Game Logo
- Game Name
- Season Tag
- CTA
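In Unity terms, these components map onto a Canvas hierarchy. One possible arrangement, where the naming and nesting are my own assumption rather than part of the mockup, looks like this:

Canvas
    SplashScreen (Panel)
        Background (Image)
        GameLogo (Image)
        GameName (TextMeshPro text)
        SeasonTag (Panel + label)
        CTA (action icon + label)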
Next, let's identify the elements that can be turned into prefabs for later reuse. The 'season tag' label is one candidate, and the 'UI Action Icon' is another, so we won't have to recreate them on every screen.
To begin with, I created a ScriptableObject that will hold all the sprites used in the game.
using UnityEngine;

// Central store for every platform-specific action icon used by the UI.
[CreateAssetMenu(fileName = "UISpriteData", menuName = "MediumArticle/UISpriteData", order = 1)]
public class UISpriteData : ScriptableObject
{
    // Primary action prompts per input device.
    public Sprite keyboardPrimaryActionImage;
    public Sprite xboxPrimaryActionImage;
    public Sprite psPrimaryActionImage;

    // Secondary action prompts.
    public Sprite keyboardSecondaryActionImage;
    public Sprite xboxSecondaryActionImage;
    public Sprite psSecondaryActionImage;

    // Tertiary action prompts.
    public Sprite keyboardTertiaryActionImage;
    public Sprite xboxTertiaryActionImage;
    public Sprite psTertiaryActionImage;

    // Cancel / back prompts.
    public Sprite keyboardCancelActionImage;
    public Sprite xboxCancelActionImage;
    public Sprite psCancelActionImage;
}
This is the current script for the ScriptableObject, where the 'UI Action Icon' sprites are stored for now. We plan to extend it as requirements grow.
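For testing, the asset can simply be dragged onto a serialized field in the Inspector. If you would rather resolve it from code, here is a minimal sketch, assuming the asset is created through the CreateAssetMenu entry above and saved in a Resources folder as "UISpriteData" (both the folder location and the UISpriteDataLoader name are my assumptions, not part of the project):

using UnityEngine;

public class UISpriteDataLoader : MonoBehaviour
{
    // Holds the loaded sprite data asset; stays null if the asset is missing.
    public UISpriteData SpriteData { get; private set; }

    private void Awake()
    {
        // Assumes the asset lives at Assets/Resources/UISpriteData.asset.
        SpriteData = Resources.Load<UISpriteData>("UISpriteData");
        if (SpriteData == null)
        {
            Debug.LogWarning("UISpriteData asset not found in a Resources folder.");
        }
    }
}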
I have also written a UIActionIconController script that will be attached to the prefab. It swaps the UI icon based on the selected action and platform. I have enabled ExecuteInEditMode so we can test the implementation in edit mode; this attribute will be removed in the final version.
using UnityEngine;
using UnityEngine.UI;

[ExecuteInEditMode] //to be removed later
public class UIActionIconController : MonoBehaviour
{
    // Which logical action this icon represents (set per prefab instance).
    [SerializeField]
    private Action _selectedAction;

    // Image component that displays the platform-specific sprite.
    [SerializeField]
    private Image _image;

    private Action _previousAction;

    public Action SelectedAction
    {
        get { return _selectedAction; }
        set { _selectedAction = value; }
    }

    private InputDevice _selectedInputDevice;
    private InputDevice _previousInputDevice;

    public InputDevice SelectedInputDevice
    {
        get { return _selectedInputDevice; }
        set { _selectedInputDevice = value; }
    }

    public enum Action
    {
        Primary,
        Secondary,
        Tertiary,
        Cancel
    }

    public enum InputDevice
    {
        XboxController,
        PSController,
        Keyboard
    }

    void Start()
    {
        _image = GetComponent<Image>();
        UpdateUIIcon();
    }

    void Update()
    {
        // Refresh the sprite only when the action or input device actually changes.
        if (_selectedAction != _previousAction || _selectedInputDevice != _previousInputDevice)
        {
            UpdateUIIcon();
            _previousAction = _selectedAction;
            _previousInputDevice = _selectedInputDevice;
        }
    }

    public void UpdateUIIcon()
    {
        // Ask the UIManager for the sprite matching the current action/device pair.
        Sprite actionSprite = UIManager.Instance.GetSprite(SelectedAction, SelectedInputDevice);
        _image.sprite = actionSprite;
    }
}
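A side note on the polling in Update: it exists only so the icon refreshes while values are tweaked in the Inspector. An alternative would be to react to Inspector changes directly with OnValidate, which is editor-only and avoids per-frame checks. Below is a minimal sketch of a method that could be added to UIActionIconController, assuming UIManager.Instance is also available in edit mode; depending on the Unity version, touching UI components from OnValidate may log harmless warnings.

#if UNITY_EDITOR
    // Called by the editor whenever a serialized field changes in the Inspector.
    // Refreshes the icon without per-frame polling; stripped from player builds.
    private void OnValidate()
    {
        if (_image != null && UIManager.Instance != null)
        {
            UpdateUIIcon();
        }
    }
#endif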
To tie it together, a UIManager script manages all the UI elements and screens in the scene. It pulls the icons from the ScriptableObject and exposes a GetSprite function that returns the appropriate sprite for a given platform and action.
using UnityEngine;

public class UIManager : MonoBehaviour
{
    // Scene-wide singleton so any UI element can request sprites.
    public static UIManager Instance { get; private set; }

    // ScriptableObject holding every platform-specific action sprite.
    public UISpriteData spriteData;

    // Splash screen root object (screen transitions come in a later part).
    public GameObject SplashScreen;

    private void Awake()
    {
        if (Instance == null)
        {
            Instance = this;
            DontDestroyOnLoad(gameObject);
        }
        else
        {
            // A UIManager already exists; discard the duplicate.
            Destroy(gameObject);
        }
    }

    // Returns the sprite matching the requested action on the given input device.
    public Sprite GetSprite(UIActionIconController.Action action, UIActionIconController.InputDevice inputDevice)
    {
        switch (inputDevice)
        {
            case UIActionIconController.InputDevice.Keyboard:
                switch (action)
                {
                    case UIActionIconController.Action.Primary:
                        return spriteData.keyboardPrimaryActionImage;
                    case UIActionIconController.Action.Secondary:
                        return spriteData.keyboardSecondaryActionImage;
                    case UIActionIconController.Action.Tertiary:
                        return spriteData.keyboardTertiaryActionImage;
                    case UIActionIconController.Action.Cancel:
                        return spriteData.keyboardCancelActionImage;
                    default:
                        return null;
                }
            case UIActionIconController.InputDevice.XboxController:
                switch (action)
                {
                    case UIActionIconController.Action.Primary:
                        return spriteData.xboxPrimaryActionImage;
                    case UIActionIconController.Action.Secondary:
                        return spriteData.xboxSecondaryActionImage;
                    case UIActionIconController.Action.Tertiary:
                        return spriteData.xboxTertiaryActionImage;
                    case UIActionIconController.Action.Cancel:
                        return spriteData.xboxCancelActionImage;
                    default:
                        return null;
                }
            case UIActionIconController.InputDevice.PSController:
                switch (action)
                {
                    case UIActionIconController.Action.Primary:
                        return spriteData.psPrimaryActionImage;
                    case UIActionIconController.Action.Secondary:
                        return spriteData.psSecondaryActionImage;
                    case UIActionIconController.Action.Tertiary:
                        return spriteData.psTertiaryActionImage;
                    case UIActionIconController.Action.Cancel:
                        return spriteData.psCancelActionImage;
                    default:
                        return null;
                }
            default:
                return null;
        }
    }
}
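Any script in the scene can now resolve an icon through the singleton. A small hypothetical example (the SplashCtaIcon name is mine, not part of the project):

using UnityEngine;

public class SplashCtaIcon : MonoBehaviour
{
    void Start()
    {
        // Fetch the primary-action prompt for a PlayStation controller.
        Sprite confirmIcon = UIManager.Instance.GetSprite(
            UIActionIconController.Action.Primary,
            UIActionIconController.InputDevice.PSController);

        Debug.Log(confirmIcon != null ? confirmIcon.name : "Sprite not assigned in UISpriteData");
    }
}

If the sprite list keeps growing, the nested switch could later be replaced with a dictionary keyed on the action/device pair, but for four actions and three devices the explicit switch is perfectly readable.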
The finished prefab is expected to look like the image above.
Now, it's time to gather the icons. I've managed to find icons for Xbox and PlayStation, but I still need one for the keyboard. So, I've decided to use Microsoft Copilot for this task. Let's see how it goes…
Well! AI never fails to disappoint me! So I decided to create my own icons. Human 1–0 AI.
We’re now going to create a panel that includes text. To accomplish this, we’ll utilize a Content Size Fitter and a Horizontal Layout Group, which will allow us to dynamically adjust the size and padding of the panel. Additionally, we’ll employ a Content Size Fitter for the text as well, enabling it to automatically resize based on the amount of text.
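These components are added and tuned in the Inspector, but for reference, the same setup done from code would look roughly like the sketch below. The padding and spacing values are placeholders rather than values taken from the mockup, and SeasonTagLayoutSetup is just an illustrative name:

using UnityEngine;
using UnityEngine.UI;

public class SeasonTagLayoutSetup : MonoBehaviour
{
    void Awake()
    {
        // Let the panel shrink-wrap around its children in both directions.
        var fitter = gameObject.AddComponent<ContentSizeFitter>();
        fitter.horizontalFit = ContentSizeFitter.FitMode.PreferredSize;
        fitter.verticalFit = ContentSizeFitter.FitMode.PreferredSize;

        // Arrange the children in a row with some padding around them.
        var layout = gameObject.AddComponent<HorizontalLayoutGroup>();
        layout.padding = new RectOffset(16, 16, 8, 8); // placeholder values
        layout.spacing = 8f;
        layout.childForceExpandWidth = false;
        layout.childForceExpandHeight = false;
    }
}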
In addition, I’ve incorporated a script to modify the text. In the future, I plan to include options for selecting the font and size, ensuring our adherence to the design system.
using UnityEngine;
using TMPro;

[ExecuteInEditMode] //to be removed later
public class UIText : MonoBehaviour
{
    // Text to display; edited in the Inspector for now.
    [SerializeField]
    private string _text;

    private TextMeshProUGUI _textUI;

    void Start()
    {
        _textUI = GetComponentInChildren<TextMeshProUGUI>();
        _textUI.text = _text;
    }
}
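One caveat with the script as written: the text is only pushed in Start, so edits to _text in the Inspector won't show up until the component is re-enabled, even with ExecuteInEditMode. If live preview matters, a method along these lines could be added to UIText; this is my sketch, not part of the original script, and the same OnValidate caveat mentioned earlier applies.

#if UNITY_EDITOR
    // Editor-only: re-apply the text whenever a serialized field changes in the Inspector.
    private void OnValidate()
    {
        var textUI = GetComponentInChildren<TextMeshProUGUI>();
        if (textUI != null)
        {
            textUI.text = _text;
        }
    }
#endif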
Following the layout adjustments, we successfully created the splash screen. In the upcoming part, we’ll delve into UI animations and screen transitions. We’re planning to leverage AI assistance for generating the UI animation code. Stay tuned to see how it unfolds! 😊