Tweaking NUI

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Name: New UI (NUI)
Summary: A replacement UI system built on an underlying low-level Canvas
Scope: Engine
Current Goal: Main menu replacement
Phase: Implementation
Curator: Immortius

Work Involved

Canvas (DONE)

Establishing an underlying canvas system that encapsulates the low-level rendering details, insulating the rest of the UI framework from the rendering implementation.

The canvas capabilities are:
  • Sub-regioning. Specifying a sub-region of the screen which will be the target of operations until the region is discarded. Optionally the region will crop anything drawn outside the region (allowing for scroll lists).
  • Text drawing - featuring
    • Color
    • Shadowing
    • Wrapping
    • Alignment
  • Texture drawing - featuring
    • Stretching/scaling/repeating
    • Border support, where the border is drawn at normal size and the contents are stretched/scaled/repeated
    • Drawing sub-regions of textures.
  • Material drawing
  • Mesh drawing
  • Alpha support for applying translucency to regions to allow fade in/out
  • Optional support for arbitrary transforms
  • Interaction region "drawing"
UI Widget Framework (DONE)

A widget is a UI element. It receives a Canvas to draw itself. Widgets will be registered in a similar fashion to components and events, so they can be discovered and referred to via a URI, and serialized/deserialized from JSON (or otherwise).

Some basic widgets will be
  • Label
  • TextInput
  • Button
  • Image
  • Slider
UI Layouts Framework (DONE)

UI containers are special UI Widgets that contain other widgets, providing them with layout.

Data Binding Framework (DONE)

In addition to having values directly set, widgets will be designed to allow binding to entities or other sources - that is, a label could be bound to "engine:DisplayInformation.name" and linked to an entity; if the entity has a DisplayInformationComponent, the name field of that component will be used as the text of the label.

UI Manager (Incomplete)

A UI manager will manage the active UI elements - rendering them, sending them inputs and so forth. It will allow screens, HUD elements, and windows to be opened, removed and accessed.

Screens, Hud Elements and Windows (Incomplete)

These are the three top-level UI elements that can be instantiated. Screens fill the entire screen, and exist in a stack. HUD elements may be positioned in different parts of the screen, with their positions controlled through config. Windows are movable elements.

Main Menu Replacement (Incomplete)

The main menu needs to be switched over completely to use NUI.

Console Replacement (Unstarted)

The console needs to be replaced with a NUI based console, and made available both ingame and in the menu.

Ingame UI Replacement (Unstarted)

The ingame UI needs to be replaced with NUI based elements.

Extension Points (Unstarted)

There needs to be a system for supporting the introduction of UI elements into existing UI elements by modules.

Other Work Outstanding
  • Cleanup
  • Line drawing
  • Databinding entities
  • Tooltips
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Progress Report

Started work on the canvas
  • Sub-regioning support is done. A sub-region can be used like so:
Code:
try (SubRegion ignored = canvas.subRegion(Rect2i.createFromMinAndMax(100, 100, 200, 200), true)) {
    canvas.drawText(font, "Some text", 200);
}
The boolean argument is whether this is a cropping region - cropping is cumulative, so a sub-region created within another sub-region will be cropped to the overlapping portion of those regions.
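For example, a minimal sketch of the cumulative cropping (the region values are arbitrary):

Code:
// Outer sub-region crops drawing to (100,100)-(300,300) of the current region.
try (SubRegion outer = canvas.subRegion(Rect2i.createFromMinAndMax(100, 100, 300, 300), true)) {
    // The inner rect is given in the outer region's coordinates (its top-left is (0,0)).
    // It extends past the outer region, so anything drawn here is cropped to the overlap.
    try (SubRegion inner = canvas.subRegion(Rect2i.createFromMinAndMax(150, 150, 400, 400), true)) {
        canvas.drawText(font, "Cropped to the overlapping area", 200);
    }
}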

Sub-regions are intended to allow UI Widgets to draw to a space of the screen without having to know about where they are being drawn in absolute coords - the container can create a sub-region before drawing them, and this gives them a view of the screen with upper-left coords of (0, 0), which may actually be anywhere on the screen. The cropping allows for implementation of scrollable regions, as they can draw their elements as normal but they will be cropped to the currently visible section of the region.

Additionally each region has its own canvas state (text color, draw offset), so changes made when rendering a region do not flow through to other regions.

  • Most text rendering is done (except alignment, and automatic incrementing of the draw offset). There are four methods for this:
Code:
    /**
     * Draws text. Text may include new lines. This text will always be left-aligned.
     */
    void drawText(Font font, String text);
 
    /**
     * Draws text. Text may include new lines. Additionally new lines will be added to prevent any given line exceeding maxWidth.
     * If an individual word is longer than the maxWidth, it will be split mid-word.
     */
    void drawText(Font font, String text, int maxWidth);
 
    /**
     * Draws text with a shadow. Text may include new lines. This text will always be left-aligned.
     */
    void drawTextShadowed(Font font, String text, Color shadowColor);
 
    /**
     * Draws text with a shadow. Text may include new lines. Additionally new lines will be added to prevent any given line exceeding maxWidth.
     * If an individual word is longer than the maxWidth, it will be split mid-word.
     */
    void drawTextShadowed(Font font, String text, int maxWidth, Color shadowColor);
Text is drawn at the current drawing offset, so:

Code:
canvas.setOffset(100, 100);
canvas.setTextColor(Color.WHITE);
canvas.drawText(Assets.getFont("engine:default"), "Some text");
will draw "Some text" with its top-left corner at (100, 100) of the current region. A width can be specified, in which case the text is split into lines as it needs to.

For performance, the text rendering no longer uses the old method of drawing individual characters with OpenGL commands. Instead a mesh is built and then rendered using a simple font material. This mesh is cached over multiple frames, until it is no longer being used, and then it is discarded. This is all invisible to the user.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Making progress on the texture drawing:



Code:
        canvas.drawTexture(Assets.getTexture("engine:testWindowBorder"), Rect2i.createFromMinAndSize(0, 0, 128, 128), ScaleMode.STRETCH);
        canvas.drawTexture(Assets.getTexture("engine:loadingBackground"), Rect2i.createFromMinAndSize(12, 12, 104, 104), ScaleMode.STRETCH);
        canvas.setOffset(15, 100);
        canvas.drawTextShadowed(font, "Stretched", Color.BLACK);
 
        canvas.drawTexture(Assets.getTexture("engine:testWindowBorder"), Rect2i.createFromMinAndSize(128, 0, 128, 128), ScaleMode.STRETCH);
        canvas.drawTexture(Assets.getTexture("engine:loadingBackground"), Rect2i.createFromMinAndSize(140, 12, 104, 104), ScaleMode.SCALE_FIT);
        canvas.setOffset(143, 75);
        canvas.drawTextShadowed(font, "Scaled Fit", Color.BLACK);
 
        canvas.drawTexture(Assets.getTexture("engine:testWindowBorder"), Rect2i.createFromMinAndSize(256, 0, 128, 128), ScaleMode.STRETCH);
        try (SubRegion ignored = canvas.subRegion(Rect2i.createFromMinAndSize(268, 12, 104, 104), true)) {
            canvas.drawTexture(Assets.getTexture("engine:loadingBackground"), Rect2i.createFromMinAndSize(0, 0, canvas.size().x, canvas.size().y), ScaleMode.SCALE_FILL);
            canvas.setOffset(0, 88);
            canvas.drawTextShadowed(font, "Scaled Fill", Color.BLACK);
        }
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI


Newly implemented:
  • Drawing "Bordered" textures. This allows you to have a small texture, say 32x32, and produce a larger area without stretching it. This allows for window borders, buttons, and other elements that can be different sizes from a single source texture.
  • Drawing arbitrary materials, allowing shaders to be used to produce whatever effect is desired. This could potentially allow for fun effects like camera views, by rendering the world to a texture and linking that to a material to post-process it.
  • Drawing mesh to an area. This can be done using a texture, in which case a special material is used to light the mesh with two directional lights. Or a material can be provided, although in that case the mesh will be unlit. begla, any thoughts on this - is there a generic way we can light up mesh rendered with materials which are intended for deferred rendering?
The monkey head rotates btw.
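Going back to the bordered textures, a draw call ends up looking roughly like this (the exact overload and argument order aren't final, so treat this as a sketch):

Code:
// Fill a 200x80 area with a 32x32 source texture, keeping the 4-pixel border
// at its natural size and stretching only the middle.
canvas.drawTexture(Assets.getTexture("engine:testWindowBorder"),
        Rect2i.createFromMinAndSize(0, 0, 200, 80),
        ScaleMode.STRETCH, new Border(4, 4, 4, 4));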
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Starting to look at "drawing" interaction regions in the canvas.

This is the more novel end of the canvas system, in that I haven't encountered it being done before. Unity has a vaguely similar system where you draw a gui element providing previous state and receiving new state, but it works out differently and has some issues.

The API I'm thinking of so far is that you draw an interaction region like this:

Code:
Rect2i region = Rect2i.createFromMinAndMax(....);
InteractionListener listener = new InteractionListener() { ... };
canvas.addInteractionRegion(region, listener);
Generally the listener should exist across frames, as it is used to identify an area for effects like the mouse leaving the interaction region. The events that the listener will receive are

Code:
    void onMouseOver(Vector2i pos, boolean topMostElement);
    void onMouseLeave();
    boolean onMouseClick(int button, Vector2i pos);
    void onMouseDrag(int button, Vector2i pos);
    void onMouseRelease(int button, Vector2i pos);
    boolean onMouseWheeled(int amount, Vector2i pos);
Basically anything related to mouse position. The position provided is relative to the region, which is important for reasons I'll touch on in a second.
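A typical listener would look something like this (just a sketch - the hovered field is only there to illustrate the idea):

Code:
InteractionListener listener = new InteractionListener() {
    private boolean hovered;

    @Override
    public void onMouseOver(Vector2i pos, boolean topMostElement) {
        hovered = topMostElement;   // pos is relative to the region, not the screen
    }

    @Override
    public void onMouseLeave() {
        hovered = false;
    }

    @Override
    public boolean onMouseClick(int button, Vector2i pos) {
        return true;                // consume the click
    }

    @Override
    public void onMouseDrag(int button, Vector2i pos) {
    }

    @Override
    public void onMouseRelease(int button, Vector2i pos) {
    }

    @Override
    public boolean onMouseWheeled(int amount, Vector2i pos) {
        return false;               // let the wheel event pass through
    }
};
canvas.addInteractionRegion(Rect2i.createFromMinAndSize(0, 0, 100, 32), listener);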

There are a number of reasons for drawing interaction regions rather than handling them outside of the canvas:
  • It allows interaction ui elements to know nothing of their overall position - all they need to know is they have some space to draw in.
  • It allows interaction to closely align with actual drawn elements - if an element is cropped, the interaction region will likewise be cropped.
This last point has some big ramifications in how the canvas system could eventually be leveraged, specifically:
  • Arbitrary transformations could be applied to the UI. A UI panel could be sloped, or rotated into the screen, or otherwise given a bit of rotation. Tying interaction regions into the canvas allows these transformations to be applied to them as well.
  • A canvas could draw to something other than the screen - like a hand-held PDA or an object in the world. The "mouse cursor" might actually be the crosshairs.
This is also why it is important that the position is relative to the region (and in the same coordinate system as the region) - the actual cursor position may have no bearing on the location of the region.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Added the code side of skinning support.

A skin works much like CSS, in that you define properties at a global level, and those values are inherited by more specific levels unless overridden. The more specific levels are:
  • StyleFamily - this works like classes in CSS, allowing UI elements to be classified into groups. For instance you might have "menu", "inventory" and "crafting" families that alter what buttons and fonts are used in the different screens.
  • UIWidget Type - defines the properties for the different widget types, like buttons, labels, or whatever.
  • Mode - for sub-modes of UIWidgets. For instance, a button has a "hover" mode and a "down" mode, in addition to the default mode. Basically this supports different widget states.
The actual properties supported are driven from the canvas operations - currently there is:

Code:
    // Background drawing options (used for buttons, slot backgrounds, window backgrounds and so forth)
    private Texture background;
    private Border backgroundBorder = new Border(0, 0, 0, 0);
    private ScaleMode backgroundScaleMode = ScaleMode.STRETCH;
 
    // The margin is the space between the region being drawn to and any drawn texture/text.
    private Border margin = new Border(0, 0, 0, 0);
 
    // Text drawing properties
    private Font font = Assets.getFont("engine:default");
    private Color textColor = Color.WHITE;
    private Color textShadowColor = Color.BLACK;
    private HorizontalAlign textAlignmentH = HorizontalAlign.CENTER;
    private VerticalAlign textAlignmentV = VerticalAlign.MIDDLE;
    private boolean textShadowed;
There's probably room for more options.

When using the canvas you just set the skin/family/widget type/mode, and then draw, and the canvas takes care of applying the style options based on those selections. Ideally the NUI Manager will automatically apply skin/family/widget type, so widgets themselves only need to concern themselves with setting the mode.
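So a button's draw code would end up being little more than this (sketch only - setMode, isDown and isHovered are placeholder names, not settled API):

Code:
// The NUI Manager has already applied skin, family and widget type; the button
// just flags which of its modes is active before drawing.
if (isDown()) {
    canvas.setMode("down");
} else if (isHovered()) {
    canvas.setMode("hover");
}
// Text color, shadow and so forth come from the resolved style.
canvas.drawText(font, getText());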

There is no asset for skins yet, although I put together a builder:

Code:
    UISkinData skinData = new UISkinBuilder()
        .setWidgetClass(UIButton.class)
            .setBackground(Assets.getTexture("engine", "button"))
            .setTextHorizontalAlignment(HorizontalAlign.CENTER)
            .setTextVerticalAlignment(VerticalAlign.MIDDLE)
            .setBackgroundBorder(new Border(1, 1, 1, 1))
            .setMargin(new Border(4, 4, 4, 4))
            .setTextShadowed(true)

            .setWidgetMode("hover")
                .setBackground(Assets.getTexture("engine", "buttonOver"))

            .setWidgetMode("down")
                .setBackground(Assets.getTexture("engine", "buttonDown"))
                .setTextColor(Color.YELLOW)
        .build();

    skin = Assets.generateAsset(new AssetUri(AssetType.UI_SKIN, "engine:defaultSkin"), skinData, UISkin.class);
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Added support for TextureAtlases. These take a texture and define a number of Subtextures. These subtextures can then be used in NUI as if they were textures.

TextureAtlases use a json format, and can contain grids of subtextures, or freeform textures (or both).

Code:
{
    "texture" : "engine:items",
    "textureSize" : [256, 256],  // Doesn't have to match the texture size, but is used to convert the absolute values in the rest of the definition to relative values
    "grid" : {
        "tileSize" : [16, 16],
        "gridDimensions" : [16, 16],
        "gridOffset" : [0, 0], // Grids can be anywhere in the texture
        "tileNames" : [
            "pick", "emptyVial", "whiteFluff", "cauliflower", "questionMark", "scroll", "apple", "emptyJug", "steelIngot", "coal", "ironOre", "", "", "skull", "whiteRecipe", "redSkull",
            "axe", "redVial", "redFluff", "redFlower", "bowl", "brownBook", "bananas", "waterJug", "copperIngot", "wick", "", "", "", "roundMeshedFlask", "redRecipe", "purpleSkull",
            "sickle", "orangeVial", "pinkFluff", "", "lantern", "redBook", "door", "", "goldIngot", "pole", "", "", "", "purplePotion", "yellowRecipe", "blueSkull",
            "hammer", "greenVial", "greenFluff", "deathCap", "scissors", "blueBook", "", "", "ironIngot", "", "", "", "", "skullOnStick", "blueRecipe", "cyanSkull",
            "knife", "purpleVial", "purpleFluff", "lavender", "", "", "", "", "", "", "", "", "", "", "", "greenSkull",
            "sword", "cyanVial", "cyanFluff", "", "candle", "", "", "", "", "", "", "", "", "", "", "yellowSkull",
            "bow", "blueVial", "blueFluff", "", "dynamiteStick", "", "", "", "", "", "", "", "", "", "", "orangeSkull",
            "crossbow", "blackVial", "brownFluff", "mandrake", "dynamite", "", "", "", "", "", "", "", "", "", "", "blackSkull",
            "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "whiteSkull"

        ]
    },
    "grids" : [...], // Can even have multiple grids
    "subimage" : {
        "name" : "window",
        "min" : [64, 64],
        "max" : [128, 96], // Only one of max and size is needed (size has priority)
        "size" : [64, 32]
    },
    "subimages" : [...] // Can have multiple subimages
}
The uri for subtextures is then built off the uri for the atlas. So if the atlas is "engine:items", the subtextures would be "engine:items.pick", "engine:items.cauliflower", etc.
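From a widget's point of view nothing changes - assuming a subtexture uri resolves through the normal texture lookup (that part is an assumption), you would just do:

Code:
// Draw the "pick" tile from the items atlas into a 32x32 inventory slot.
canvas.drawTexture(Assets.getTexture("engine:items.pick"),
        Rect2i.createFromMinAndSize(0, 0, 32, 32), ScaleMode.STRETCH);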
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Pottering about getting a feel for the next steps - widgets and layout.

I've added this method to the Canvas:

Code:
void drawWidget(UIWidget widget, Rect2i region);
This sets up the canvas using the style for the widget and creates a subregion for rendering it in. With this, a widget can contain other widgets and draw them. This leaves a lot of flexibility, but generally we would probably have widgets for drawing things and widgets for laying out other widgets. Added a ColumnLayout widget that draws elements into a grid with some padding, and the result is that this code:

Code:
    private ColumnLayout grid;
 
    public void initialise() {
        grid = new ColumnLayout();
        grid.addWidget(new UIButton("Single Player"));
        grid.addWidget(new UIButton("Host Game"));
        grid.addWidget(new UIButton("Join Game"));
        grid.addWidget(new UIButton("Settings"));
        grid.addWidget(null);
        grid.addWidget(new UIButton("Exit"));
        grid.setPadding(new Border(0, 0, 4, 4));
    }
 
    public void render() {
        canvas.setSkin(skin);
        canvas.drawTextureRaw(Assets.getTexture("engine:menuBackground"), Rect2i.createFromMinAndSize(Vector2i.zero(), canvas.size()), ScaleMode.SCALE_FILL);

        canvas.drawWidget(grid, Rect2i.createFromMinAndSize((canvas.size().x - 280) / 2, (canvas.size().y - 192) / 2, 280, 192));
    }
(plus some skin building as above)

produces



(which also adapts to the screen size)

Next will be fleshing out the widgets and layout types, creating the top-level containers (Screen, Window and HudElement) and setting up the NUIManager (instead of testing the UI through the existing UI system).
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Starting to pull things together. Now have a working main menu, although the sub-menus it opens are not complete yet. Code looks like this:

Code:
    @In
    private GameEngine engine;

    @In
    private NUIManager nuiManager;

    public MainMenuScreen() {
        ColumnLayout grid = new ColumnLayout();
        grid.addWidget(new UIButton("singleplayer", "Single Player"));
        grid.addWidget(new UIButton("multiplayer", "Host Game"));
        grid.addWidget(new UIButton("join", "Join Game"));
        grid.addWidget(new UIButton("settings", "Settings"));
        grid.addWidget(new UISpace());
        grid.addWidget(new UIButton("exit", "Exit"));
        grid.setPadding(new Border(0, 0, 4, 4));

        ArbitraryLayout layout = new ArbitraryLayout();
        layout.addFixedWidget(new UIImage(Assets.getTexture("engine:terasology")), new Vector2i(512, 128), new Vector2f(0.5f, 0.2f));
        layout.addFillWidget(new UILabel("Pre Alpha"), Rect2f.createFromMinAndSize(0.0f, 0.3f, 1.0f, 0.1f));
        layout.addFixedWidget(grid, new Vector2i(280, 192), new Vector2f(0.5f, 0.7f));

        setContents(layout);
    }

    @Override
    public void setContents(UIWidget contents) {
        super.setContents(contents);
        find("singleplayer", UIButton.class).subscribe(new ButtonEventListener() {
            @Override
            public void onButtonActivated(UIButton button) {
                // Open
            }
        });
        find("multiplayer", UIButton.class).subscribe(new ButtonEventListener() {
            @Override
            public void onButtonActivated(UIButton button) {
                // Open
            }
        });
        find("settings", UIButton.class).subscribe(new ButtonEventListener() {
            @Override
            public void onButtonActivated(UIButton button) {
                UIScreen settings = new SettingsMenuScreen();
                settings.setSkin(getSkin());
                nuiManager.pushScreen(settings);
            }
        });
        find("exit", UIButton.class).subscribe(new ButtonEventListener() {
            @Override
            public void onButtonActivated(UIButton button) {
                engine.shutdown();
            }
        });
    }
Ultimately the menu will be defined in a layout file, so the contents will not need to be created in code. The subscriptions will still be set up in code though - effectively the screen class itself becomes the controller.

UIScreens are kept in a stack in NUIManager, with the topmost (last pushed) screen rendered. In the future this could be extended so a screen can determine whether a screen below should be rendered (and whether it should receive input). This technique works well for menus, as each screen can be popped to return to the previous screen, but something other than screens will be needed for other things.

Skins now have an asset file though, which looks like:

Code:
"text-shadowed" : true,
    "font" : "default",
    "widgets" : {
        "ArbitraryLayout" : {
            "background-mode" : "scale_fill",
            "background" : "menuBackground"
        },
        "UILabel" : {
            "text-vertical-alignment" : "top"
        },
        "UIImage" : {
            "texture-scale-mode" : "scale_fit"
        },
        "UIButton" : {
            "text-horizontal-alignment" : "center",
            "text-vertical-alignment" : "middle",
            "background" : "button",
            "background-border" : {
                "top" : 2,
                "bottom" : 2,
                "left" : 2,
                "right" : 2
            },
            "margin" : {
                "top" : 2,
                "bottom" : 2,
                "left" : 2,
                "right" : 2
            },
            "texture-scale-mode" : "scale_fit",
            "modes" : {
                "hover" : {
                    "background" : "buttonOver"
                },
                "down" : {
                    "background" : "buttonDown",
                    "text-color" : "FFFF00FF"
                }
            }
        }
    },
    "families" : {
        "option-grid" : {
            "widgets" : {
                "UILabel" : {
                    "text-vertical-alignment" : "middle",
                    "text-horizontal-alignment" : "right"
                }
            }
        }
    }
}
For now I will continue reimplementing the main menus using NUI, and improving NUI as needed. Databinding will likely be the next major feature.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Implemented checkboxes:



Added some additional style options to support this - the ability to give an element a fixed width and fixed height, and to control how it is positioned in a region larger than that size.

Also added Data Binding support. A binding uses this interface:

Code:
public interface Binding<T> {

    T get();

    void set(T value);
}
And then a UI element will use it like:

Code:
public class UILabel extends AbstractWidget {

    private Binding<String> text = new DirectBinding<>("");
 
    // ....

    public String getText() {
        return text.get();
    }

    public void setText(String text) {
        this.text.set(text);
    }

    public void bindText(Binding<String> binding) {
        this.text = binding;
    }

    @Override
    public void onDraw(Canvas canvas) {
        canvas.drawText(text.get());
    }
}
DirectBinding is a simple implementation that holds the value (doesn't actually bind to anything) - this name probably needs work.
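For reference, DirectBinding is essentially just this (give or take):

Code:
public class DirectBinding<T> implements Binding<T> {

    private T value;

    public DirectBinding(T initialValue) {
        this.value = initialValue;
    }

    @Override
    public T get() {
        return value;
    }

    @Override
    public void set(T value) {
        this.value = value;
    }
}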

At the moment the binding of these check boxes is done like:

Code:
find("bobbing", UICheckbox.class).bindChecked(new Binding<Boolean>() {
            @Override
            public Boolean get() {
                return config.getRendering().isCameraBobbing();
            }

            @Override
            public void set(Boolean value) {
                config.getRendering().setCameraBobbing(value);
            }
        });
This could be simplified with some general purpose bindings, like a BeanFieldBinding("fieldName", SomeType.class). Ultimately we would also have a concept of an EntitySource, which then provides bindings using a dot notation - changing the entity connected to the source would update all the bindings.
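A quick sketch of what such a binding might look like (nothing like this exists yet - the name, constructor and error handling are all placeholders):

Code:
import java.lang.reflect.Field;

public class BeanFieldBinding<T> implements Binding<T> {

    private final Object bean;
    private final Field field;

    // Note: getDeclaredField only finds fields declared directly on the bean's class.
    public BeanFieldBinding(Object bean, String fieldName) throws NoSuchFieldException {
        this.bean = bean;
        this.field = bean.getClass().getDeclaredField(fieldName);
        this.field.setAccessible(true);
    }

    @Override
    @SuppressWarnings("unchecked")
    public T get() {
        try {
            return (T) field.get(bean);
        } catch (IllegalAccessException e) {
            throw new RuntimeException("Unable to read field " + field.getName(), e);
        }
    }

    @Override
    public void set(T value) {
        try {
            field.set(bean, value);
        } catch (IllegalAccessException e) {
            throw new RuntimeException("Unable to write field " + field.getName(), e);
        }
    }
}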

Next I'll be looking at dropdown lists, which will require some changes to how the canvas works.
 

msteiger

Active Member
Contributor
World
Architecture
Logistics
Great stuff! I've written a small UI screen to preview the generated world. It still uses old UI code, but I'm willing to port it as soon as NUI is ready for it.
I need the following items for that:
  • UISlider
  • Radio Groups (maybe based on buttons or checkboxes)
  • Tooltips would be nice, but I can live without
Maybe I can help with the implementation .. ?
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
At the moment it is very much in the stage of nutting out the general API and behavior, so it is probably tricky to collaborate right now. When things have firmed up a little, it would be good if you have a go at recreating the preview screen with NUI, including adding any UI elements you need that are missing, and letting me know any issues or thoughts you have on it.

On the elements you mention, UISlider is done (horizontal at least).


A radio group I envision would actually just be a special binding and reskin of check boxes where the "check" depends on whether the bound value matches a desired value (so, assuming you are using an enum, the binding would compare the current selection to a specific enum value). So the individual radio buttons would not be explicitly connected, just happen to bind to the same field.
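A sketch of the sort of binding I mean, assuming the underlying setting is an enum (the class and names are illustrative only):

Code:
// One of these per radio button; all of them bind to the same underlying field.
public class EnumOptionBinding<E extends Enum<E>> implements Binding<Boolean> {

    private final Binding<E> source;   // the shared field being selected
    private final E option;            // the value this "radio button" represents

    public EnumOptionBinding(Binding<E> source, E option) {
        this.source = source;
        this.option = option;
    }

    @Override
    public Boolean get() {
        return source.get() == option;
    }

    @Override
    public void set(Boolean checked) {
        if (checked) {
            source.set(option);
        }
    }
}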

Tooltips are a good idea.
 

msteiger

Active Member
Contributor
World
Architecture
Logistics
Ok, thanks for the info. Can I contact you directly regarding NUI stuff and bug reports? If so, how? Creating github issues might be overkill.

For example:
The Color constructor is supposed to check params. However, the check is done using OR not AND which is probably not intended.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Either post in this thread or start a direct conversation in this forum (click on my name and then 'Start Conversation'). And you are correct, it should be AND, not OR.
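For the record, the intended check is of this shape (a sketch, not the actual constructor code):

Code:
// Each colour component must satisfy BOTH bounds, so the comparisons must be
// combined with AND - combined with OR the check passes for any value.
private static void checkComponent(int value) {
    if (!(value >= 0 && value <= 255)) {
        throw new IllegalArgumentException("Color component out of range 0-255: " + value);
    }
}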
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Implemented the bare bones of support for loading layouts from file. For instance, the main menu looks like:

Code:
{
    "type" : "engine:mainMenuScreen",
    "skin" : "engine:mainMenu",
    "contents" : {
        "type" : "arbitraryLayout",
        "contents" : [
            {
                "type" : "UIImage",
                "image" : "engine:terasology",
                "layoutInfo" : {
                    "mode" : "fixed",
                    "size" : [512, 128],
                    "center" : [0.5, 0.2]
                }
            },
            {
                "type" : "engine:columnLayout",
                "layoutInfo" : {
                    "mode" : "fixed",
                    "size" : [280, 192],
                    "center" : [0.5, 0.7]
                },
                "columns" : 1,
                "padding" : {
                    "left" : 0,
                    "right" : 0,
                    "top" : 4,
                    "bottom" : 4
                },
                "contents" : [
                    {
                        "type" : "UIButton",
                        "id" : "singleplayer",
                        "text" : "Single Player"
                    },
                    {
                        "type" : "UIButton",
                        "id" : "multiplayer",
                        "text" : "Host Game"
                    },
                    {
                        "type" : "UIButton",
                        "id" : "join",
                        "text" : "Join Game"
                    },
                    {
                        "type" : "UIButton",
                        "id" : "settings",
                        "text" : "Settings"
                    },
                    {
                        "type" : "UISpace"
                    },
                    {
                        "type" : "UIButton",
                        "id" : "exit",
                        "text" : "Exit"
                    }
                ]
            },
            {
                "type" : "UILabel",
                "id" : "version",
                "family" : "title",
                "text" : "Pre Alpha",
                "layoutInfo" : {
                    "mode" : "fill",
                    "region" : {
                        "min" : [0.0, 0.3],
                        "size" : [1.0, 0.1]
                    }
                }
            }
        ]
    }
}
The notable part is that any type which is a subtype of UILayout<T extends LayoutHint> will load each of the widgets listed under "contents", deserialize the information in each widget's "layoutInfo" section as the appropriate LayoutHint class for the layout, and add them with:

Code:
void addWidget(UIWidget widget, T layoutHint);
Otherwise the structure of the file is just a hierarchy of UIWidgets. Typically the topmost widget (the UIScreen) acts as the controller, and then there is a series of nested layouts culminating in the individual widgets.

As part of this work I've put together a new framework for TypeHandlers - these handle how types are serialized and deserialized. The new framework is implementation agnostic, meaning the same TypeHandler can be used for both Gson and Protobuf. This still needs to be integrated across the rest of the engine.
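The general idea, in simplified form (the intermediate data type here is just a stand-in for whatever the framework actually uses):

Code:
// Stand-in for whatever intermediate representation sits between the handler
// and the Gson / Protobuf backends.
interface PersistedData {
}

public interface TypeHandler<T> {

    // Converts a value into the intermediate representation.
    PersistedData serialize(T value);

    // Reconstructs a value from the intermediate representation.
    T deserialize(PersistedData data);
}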
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Things have been a bit slow going - it has been too hot to have my computer on. I have put together a new layout that allows defining the layout of contents relative to either each other or the overall canvas:

Code:
    "type" : "relativeLayout",
    "contents" : [
        {
            "type" : "UIImage",
            "image" : "engine:terasology",
            "id" : "title",
            "layoutInfo" : {
                "width" : 512,
                "height" : 128,
                "position-horizontal-center" : {},
                "position-top" : {
                    "target" : "TOP",
                    "offset" : 48
                }
            }
        },
        {
            "type" : "UILabel",
            "id" : "subtitle",
            "family" : "title",
            "text" : "Select World",
            "layoutInfo" : {
                "height" : 48,
                "position-horizontal-center" : {},
                "position-top" : {
                    "target" : "BOTTOM",
                    "offset" : 16,
                    "widget" : "title"
                }
            }
        },
        {
            "type" : "UIButton",
            "text" : "Back",
            "id" : "close",
            "layoutInfo" : {
                "width" : 128,
                "height" : 32,
                "position-horizontal-center" : {},
                "position-bottom" : {
                    "target" : "BOTTOM",
                    "offset" : 48
                }
            }
        },
        {
            "type" : "UIList",
            "layoutInfo" : {
                 "width" : 512,
                 "position-horizontal-center" : {},
                 "position-top" : {
                     "target" : "BOTTOM",
                     "element" : "subtitle",
                     "offset" : 32
                 },
                 "position-bottom" : {
                     "target" : "TOP",
                     "element" : "close",
                     "offset" : 64
                 }
            }
        }
    ]
}
This creates a screen with a Title in a fixed position from the top of the screen, a Subtitle hanging under that, a Back button at the bottom and a list using the remaining space in between.

I have also moved some of the existing menus over to using layout files - getting a feel for the advantages and disadvantages. The layout files feel a bit more verbose than code, but you can update them at runtime and the change takes effect next time the UI is loaded - even when not running in debug mode (although you need to build the project so the layout gets copied onto the classpath).

My plan after Christmas is to finish off the remaining screens for the main menu, do a cleanup pass over the whole framework, and then drop the old GUI system from the main menu entirely. At this point I will commit to develop, and the code will be ready for the MigLayout - if you are up for it, synopia. Then I will begin replacing the in-game UI (I also want to get the console available in the main menu).
 

metouto

Active Member
Contributor
Art
Immortius .... you do sleep and eat right ???? :coffee: .... you do amaze me with what you do :thumbsup:
 

Mike Kienenberger

Active Member
Contributor
Architecture
GUI
Immortius,

As a first project, I was playing around with adding a 2D view of the surrounding blocks (aka Dwarf Fortress view) on the HUD, which is starting to look a lot like a minimap. I initially wanted to make it a separate screen, but decided it made more sense as a HUD display element.

After I got it working yesterday, I realized there is a need for the ability to add/remove/access HUD elements in a generic way in order to make this modular. I was considering creating a patch to support this API when I saw this NUI thread. You mentioned HudElements earlier. Is this what you meant -- an API for adding/removing/accessing various DisplayElements on the HUD?

I'm new to github, and while I tried searching in the Terasology network graph under immortius/Terasology, I couldn't find any newer forks from you after July in order to look at the HudElements code you mentioned.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Hi Mike,

That is exactly right. NUI (New UI) is a complete replacement for Terasology's UI framework that is designed to be both mod-friendly and (hopefully) easier to use than the existing UI framework. Part of NUI is in the 'develop' branch right now, but currently disabled. The rest of it is on my PC (I will push it into my repo shortly, been delaying it too long). In neither case does it extend in-game yet - but soon hopefully.

What I'm going for regarding HUD elements is for modules to be able to register additional HUD elements through the manager. Additionally I think it would be good if the positioning of those HUD elements is held in config, and ultimately editable at runtime - so if someone wants to put their health bars on the side of the screen rather than the bottom, they can do that. I haven't even begun on any of this yet though - been focusing on the main menu and the overall framework to start with.
 

Mike Kienenberger

Active Member
Contributor
Architecture
GUI
I hope you don't mind, but I'd like to start working out the HUD element API now, and program against it -- I can write a simple version for the current non-NUI GUI and see how well it works in practice.

I'd like to see it go beyond just repositioning. I'd like to be able to disable, reenable, hide, display, and replace existing HUD elements, as well as fetch a HUD element instance to call other methods on it. Disable would stop an element from receiving events, whereas hide would simply not render it. I think we would also want the ability to enable/disable keyboard bindings for these elements as well.

As an example, have the ability to switch between the core "imp view" hud and a "dungeon keeper" hud for giving orders to creatures.
 