Tweaking NUI

Immortius

Lead Software Architect
Contributor
Architecture
GUI
I hope you don't mind, but I'd like to start working out the HUD element API now and program against it -- I can write a simple version for the current non-NUI GUI and see how well it works in practice.
Sure, it would be interesting to see what features you are thinking of. I'd suggest that HUD elements should be identified by a SimpleUri. But at the same time, it is unlikely we will incorporate further updates to the old UI system, so keep that in mind (your work would be proof of concept and prototyping only, essentially). Also, there are some major differences in approach between NUI and the old UI, so some of your concerns may not even be an issue.

I'd like to see it go beyond just repositioning. I'd like to be able to disable, re-enable, hide, display, and replace existing HUD elements, as well as fetch a HUD element instance to call other methods on it. Disable would stop an element from receiving events, whereas hide would stop it from being rendered. I think we would also want the ability to enable/disable keyboard bindings for these elements as well.

As an example: the ability to switch between the core "imp view" HUD and a "dungeon keeper" HUD for giving orders to creatures.
I was specifically highlighting that positioning would be configurable by the user, not that it would be the extent of the capabilities available to code.

On the specific features you mention...
* Disable/Enable: NUI currently uses "focus" to determine whether to send key input to an element, so that side of things is taken care of, I believe? If you mean something other than input events, I believe UI should be databinding driven, not event driven - ideally, once set up, a HUD element should take care of itself for the most part and not need to be updated by events or method calls.
* Hide/Display: Visibility should be a property of all widgets, whether top-level like the HUD element or low-level like a label. I would suggest the HUD element should hide/show itself based on a combination of internal logic and a databindable property that determines visibility, if at all possible (see the sketch after this list). Actually, I should just add this to NUI next. Visibility and receiving input are tied together - an invisible element isn't rendered, and thus doesn't "draw" interaction regions, so it cannot receive interactions. This makes sense to me because an invisible UI element shouldn't intercept clicks or input.
* NUI widgets don't support keybinding at the moment. What are your thoughts on it being necessary? I think it is probably useful for some things, like having whatever key is bound to opening the console also close it. At the same time, this could be handled by a component system.
* Replacement makes sense, and is further enabled by the use of JSON assets for defining UI elements - you can replace a UI element using the standard override capability of Terasology's module system without any code, if it is defined as an asset. This obviously is problematic if multiple modules want to replace the same element, but the same problem would exist through code.
* Multiple HUDs or HUD setups is interesting, will meditate on that.
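As a sketch, a databindable visibility property could look something like this (the method names are just a guess at this point, using the existing Binding/DefaultBinding types):

Code:
// Sketch only: widget visibility backed by a Binding, so game logic can
// drive it without pushing events or method calls at the widget.
private Binding<Boolean> visible = new DefaultBinding<>(Boolean.TRUE);

public void bindVisible(Binding<Boolean> binding) {
    visible = binding;
}

public boolean isVisible() {
    return visible.get();
}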

Additionally, I'd like to have an extension point system so modules can add into existing elements in predefined ways without having to replace the entire element.
 

Mike Kienenberger

Active Member
Contributor
Architecture
GUI
But at the same time, it is unlikely we will incorporate further updates to the old UI system, so keep that in mind (your work would be proof of concept and prototyping only, essentially). Also, there are some major differences in approach between NUI and the old UI, so some of your concerns may not even be an issue.
I'm fine with doing prototyping/research/proof-of-concept stuff that will never be merged in. At this point, my knowledge of the Terasology architecture and implemented functionality is myopic or blind in many places, and I'll probably end up reinventing some nice square wheels to replace existing round ones. :)

* Disable/Enable: NUI currently uses "focus" to determine whether to send key input to an element, so that side of things is taken care of, I believe? If you mean something other than input events, I believe UI should be databinding driven, not event driven - ideally, once set up, a HUD element should take care of itself for the most part and not need to be updated by events or method calls.
I was thinking both keyboard input and events. More on input below. As for events, if everything is databinding driven, the underlying component system for the HUD element can decide whether an event should be processed, since it can determine how the HUD element is being used at the time (visible, hidden, disabled/unused). So not an issue in NUI.

* NUI widgets don't support keybinding at the moment. What are your thoughts on it being necessary? I think it is probably useful for some things, like having whatever key is bound to opening the console also close it. At the same time, this could be handled by a component system.
Consider again my minimap HUD element, which toggles between axes (plural of axis - not a hatchet) using the Z key. Now suppose someone else replaces my minimap with one that has no Z key functionality. If the key input is only sent to the minimap when the minimap is visible, then again, not an issue in NUI.

Additionally I'd like to have an extension point system so modules can add into existing elements in predefined ways without having to replace the entire element.
This sounds like something beyond the HUD element system. A different API for the UIWidgets in NUI to make subclassing easier, maybe. I'm interested in hearing if you have a different approach in mind.



Last night, I converted the old GUI HUD to a HUDElementManager. After removing some unused methods that might eventually come back and switching to SimpleUri, this is what it would have looked like cleaned up.

The API below was sufficient to convert the current GUI HUD into a HUDElement system.
I have not yet tried to move the new classes for HUDElementBubbles/Crosshair/Toolbar/etc. into the Core Module (or some other HUD module) instead of leaving them in the engine, but the system has been sufficient so far, and it seems like it would remain sufficient as long as there is a way to globally fetch the HUD object.

Code:
public interface HUD {

    Collection<? extends HUDElement> getHUDElements();
    void addHUDElement(HUDElement hudElement);
    void removeHUDElement(HUDElement hudElement);

    // Not needed yet, but would be needed for modules -- really a convenience method over getHUDElements().
    // public HUDElement getHUDElementById(SimpleUri hudElementId)
}

public interface HUDElement {

    SimpleUri getId();

    // Called when added to HUD -- probably should be named onAddToHUD() or something similar.
    void initialise();

    // For GUI, these are added to the HUD screen's display elements.
    // This is our only direct dependency on GUI in the API so far.
    List<UIDisplayElement> getDisplayElements();

    // Would be obsoleted in NUI, but doesn't directly add a dependency on GUI.
    void update();

    // Might need more lifecycle points to be flexible -- before add, after add, before remove, after remove.
}
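For illustration, usage from a module would look something like this (the CoreRegistry lookup and the MinimapHUDElement class are just placeholders for whatever registration mechanism and element we end up with):

Code:
// Hypothetical usage: fetch the global HUD and register a custom element.
HUD hud = CoreRegistry.get(HUD.class);
hud.addHUDElement(new MinimapHUDElement());  // implements HUDElement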
 

Mike Kienenberger

Active Member
Contributor
Architecture
GUI
I've now moved the Minimap UI objects and HUD objects into a separate module and debugged it.

I noticed that I hit a couple of security exceptions right away. The first is probably a minor issue not related to NUI.

The NullEntityRef class was in the .internal package, which made EntityRef.NULL unusable from a module. Moving it out of the .internal package fixed that one.

The second was that in order to draw UI elements from a module, I had to allow access to org.lwjgl.opengl, specifically for GL11, in TerasologyEngine.java: moduleSecurityManager.addAPIPackage("org.lwjgl.opengl");

I guess the question is how much a module is going to be allowed to do with graphics. And maybe access to the GL11 class is sufficient, rather than allowing access to the entire package.
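If ModuleSecurityManager supports whitelisting at class granularity (I'm assuming a method along the lines of addAPIClass here), that would look like:

Code:
// Assumption: a class-granular counterpart to addAPIPackage exists.
moduleSecurityManager.addAPIClass(org.lwjgl.opengl.GL11.class);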

Once I fixed those two security restrictions, the GUI Minimap module builds and runs using the new HUD Element API (although I still need to clean it up to use SimpleUri instead of String in my own code).

Well, I still need to determine how I can commit this to a different git branch than my original non-HUD-Element, non-Module Minimap, which is now in the HEAD of my git fork, but that's probably a project for another day.

Merry Christmas!
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Yeah, the modules are sandboxed away from stuff like files and OpenGL. We need to expose that properly through the modding API, probably in this case via NUI, and keep direct use of OpenGL off limits :)

Thanks and keep up the good work!
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Yeah, essentially this is one of the drivers for NUI. I had a go at moving the Inventory code into a module, but because the current UI uses direct OpenGL to render, this would require giving modules direct access to lwjgl. Besides breaking encapsulation and making it harder to move away from lwjgl, this access is dangerous, as a mismatched gl call will crash Terasology badly.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
So the main menu has now been replaced with a pure NUI main menu. Let me know of any issues.

Towards the end there was some fiddling with exactly how the size of a widget is calculated. The principle that a widget is given a space to draw in by its container still stands, but widgets now have methods to calculate the maximum and minimum preferred size of their content. The canvas then has a set of methods to calculate the sizes of widgets using these methods plus skin settings.
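In sketch form, the contract is shaped roughly like this (abbreviated, and treat the exact signatures as approximate):

Code:
// Widgets report the preferred and maximum size of their content...
public interface UIWidget {
    Vector2i getPreferredContentSize(Canvas canvas, Vector2i sizeHint);
    Vector2i getMaxContentSize(Canvas canvas);
}

// ...and the Canvas combines these with skin settings (fixed/min/max
// sizes) to work out the final size a widget is given:
public interface Canvas {
    Vector2i calculateRestrictedSize(UIWidget widget, Vector2i sizeRestrictions);
    Vector2i calculateMaximumSize(UIWidget widget);
}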

The upshot of this is that on the initial menu screen, the UI layout merely states that the list of buttons should sit between the version label and the bottom of the screen, and be horizontally centered. The actual size of the buttons comes from the skin, and the column layout they live in calculates its size from its content.

This approach still needs to be applied to the other layouts though.

Other outstanding issues that will wait until after ingame integration:
* Tooltips
* Tabbing between fields (and a focused appearance mode for most widgets - this is likely important for accessibility at the very least)
* FontColor support for drawText()
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Thank you for this awesome next big step - the new menu looks fantastic! :omg:

I've started looking for issues here and there and putting them on GitHub. With NUI merged in, I figure it is mature enough to let the masses at it, both for small bug fixes and for starting to apply it to the modules that have been screaming for widgets? :)
 

synopia

Member
Contributor
Architecture
GUI
From your list, I vote for Tooltips ;) They would help a lot with using the BT editor, I think. But it's not time critical, so just do what is most fun for you :-D
 

Mike Kienenberger

Active Member
Contributor
Architecture
GUI
From your list, I vote for Tooltips ;) They would help a lot with using the BT editor, I think. But it's not time critical, so just do what is most fun for you :-D
Without tooltips, the inventory system is very difficult to use. Hopefully the new system will be able to show tooltips for the toolbar all the time.
 

Mike Kienenberger

Active Member
Contributor
Architecture
GUI
I think UIText also needs a font-size setting (probably Font in general does).

Also, consider adding method-chaining for RowLayoutHint and RelativeLayoutHint - it's only there for HorizontalHint and VerticalHint currently.
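For example, something like this hypothetical chaining (the setter name is invented for illustration):

Code:
// Hypothetical fluent style, mirroring what HorizontalHint/VerticalHint allow:
rowLayout.addWidget(label, new RowLayoutHint().relativeWidth(0.3f));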
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
I guess I should start some overall documentation for NUI.

Nice UI (NUI)

Overview

NUI is a UI framework developed specifically with the needs of game UI in mind - dynamic elements, reflective of game state and adaptive to different resolutions.

Principles

Rendering implementation independent

To allow flexibility in rendering implementation, all of NUI's rendering is driven through a Canvas which provides the primitive methods for rendering. This allows the rendering technology to be changed easily.

Style applied through Skins

A hierarchical skin definition, similar to Cascading Style Sheets (CSS), is used to define the style for each displayed element. The Canvas automatically applies many of these settings, such as drawing the background and applying any margin. This provides a number of benefits:
  • Skins can be updated or switched at runtime, instantly changing the appearance of the UI elements that use them
  • Widgets themselves can concentrate on how they behave and what they draw - not on how it appears. Common appearance options like fonts, colors, backgrounds and margins are handled automatically by the Canvas.
  • Reduces the configuration needed per widget. Rather than having to apply a font, background, text alignment, and text color for every label in a table, you can define a style family with those settings in a skin once, and then configure all those labels to use that family.
Databinding

Widget properties can be bound directly to a field elsewhere, keeping the widget synchronized with that field. This once again reduces the code required to hook up the UI - rather than having to push changes into a widget and then subscribe to widget events to get changes back out, you can bind a widget property directly to the field and the rest is taken care of.
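For example, inside a screen's initialise() a label can be bound to game state like this (a minimal sketch - the widget id and the player object are stand-ins):

Code:
// Sketch: the label's text tracks the player's health with no event wiring.
UILabel healthLabel = find("health", UILabel.class);
healthLabel.bindText(new ReadOnlyBinding<String>() {
    @Override
    public String get() {
        return String.valueOf(player.getHealth());  // 'player' is a stand-in
    }
});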

Layout Assets

While optional, the layout of UI elements can be defined in a JSON format, and the element then built from this definition. The control widget - the root widget of the layout - is given the opportunity to do any databinding, event subscription or other work to provide the control logic of the UI.
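As a sketch, a control widget's setup might look like the following (the screen class, widget id and behaviour here are hypothetical):

Code:
// Hypothetical control widget for a JSON-defined layout: once the layout
// asset is loaded, initialise() wires up bindings and event subscriptions.
public class SettingsScreen extends CoreScreenLayer {
    @Override
    public void initialise() {
        UIButton closeButton = find("close", UIButton.class);
        closeButton.subscribe(new ActivateEventListener() {
            @Override
            public void onActivated(UIWidget widget) {
                getManager().popScreen();
            }
        });
    }
}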
 

Mike Kienenberger

Active Member
Contributor
Architecture
GUI
I've made a decent first pass at implementing NUI in AWT, with a couple of JFrame constructs added in, and the menu screens run with a few visual glitches. One thing I noticed was that button events don't include the mouse location, but instead poll for the mouse location later. It seems like that might return a different mouse location than at the time of the button up/down event.
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
We'll add more badges later :giggle:
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
I've made a decent first pass at implementing NUI in AWT, with a couple of JFrame constructs added in, and the menu screens run with a few visual glitches. One thing I noticed was that button events don't include the mouse location, but instead poll for the mouse location later. It seems like that might return a different mouse location than at the time of the button up/down event.
Good point - I don't know what I was thinking, the mouse position is right there in the event. :) Although... even with this "polling", tooltips lag behind the mouse position, so I guess lwjgl doesn't update its mouse position until you process the input queue.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Skinning

In NUI, a skin is a collection of styles which are applied when rendering widgets. Styles have the following settings:

Background Options

background - The texture region drawn behind a widget
background-border - The size of the border of the background. The border is drawn unscaled around the edge of the widget.

background-scale-mode - The method used to scale the background when a widget doesn't match the size of the background region. If the background has a border, this applies to the internal region of the background. The options are:
  • Stretch - The image is stretched non-uniformly to fill the area.
  • Scale Fit - The image is scaled uniformly so that it touches the edges of the draw area without exceeding it.
  • Scale Fill - The image is scaled uniformly so that it fills the draw area. Parts of the image extending past the draw area are cropped.
  • Tiled - The image is tiled to fill the area, without stretching or scaling it. The tiling is centered.
Size Options

fixed-width - Fixes the width with which an element will be drawn.
fixed-height - Fixes the height with which an element will be drawn.
min-width - Set a minimum width for an element.
min-height - Set a minimum height for an element.
max-width - Set a maximum width for an element - it will not grow beyond this size to fill space.
max-height - Set a maximum height for an element - it will not grow beyond this size to fill space.
align-horizontal - If an element has more space available than its maximum width, this is how it will be aligned (left, right or center).
align-vertical - If an element has more space available than its maximum height, this is how it will be aligned (top, bottom or middle).

Content Options

margin - The space between the edges of the element and any content. Often this is the same as or larger than the background-border, so the content fits inside the border.
texture-scale-mode - For some elements, this is used to determine how to scale any texture content.

Text Options

font - The font to use.
text-color - The base color of the text.
text-align-horizontal - The horizontal alignment of text content.
text-align-vertical - The vertical alignment of text content.
text-shadowed - Whether the text should have a shadow (true/false).
text-shadow-color - The color of the text's shadow, if any.

Style Hierarchy

Skins define a hierarchy of styles, with more specific styles inheriting from and overriding the broader ones.

Base - The base style defines the core settings for all styles.
Family - A family is a string that is used to provide a different appearance to elements using the same skin. For instance, a screen might have normal buttons, and "flashy" buttons using a different background and font.
Element - Provides settings for different types of widgets.
Part - Some widgets have subparts. For instance, a dropdown has the main entry, and then the list that appears when it is clicked on. All widgets have a default "base" part which can be used to add settings for how the main part of the widget is rendered without affecting the subparts.
Mode - Widgets may have different modes - such as when the mouse is hovering over them, or when they are being clicked on. This allows styles to differ depending on the state of a widget.

Code:
Base
  +-Element
  |  +-Mode
  |  +-Part
  |    +-Mode
  +-Family
    +-Element
      +-Mode
      +-Part
        +-Mode
Skin Inheritance

One final feature of skins is that they can inherit from each other. So if you want to make a custom skin that is mostly like the default skin, you can do:

Code:
{
    "inherit" : "default",
    ... // Any additions or changes.
}
Console Commands

If you want to work on a skin while the game is running, you can use the "reloadSkin <uri>" command to reload it immediately. You will need to ensure the changed skin file has been deployed - in IntelliJ this means using "Make Project...". Any UI element using the skin will be updated immediately. Note that if you change a skin that is inherited by other skins, you will need to reload those skins as well before they are updated.
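For example (assuming the default skin's full uri is engine:default - substitute your own skin's uri):

Code:
reloadSkin engine:default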
 

Mike Kienenberger

Active Member
Contributor
Architecture
GUI
Good point - I don't know what I was thinking, the mouse position is right there in the event. :) Although... even with this "polling", tooltips lag behind the mouse position, so I guess lwjgl doesn't update its mouse position until you process the input queue.
Even if the input system wasn't providing the mouse location as part of the event, you could poll for it while creating a NUI event instead of trying to get it at some future point. I'm not sure how a queue of events would ever end up with different mouse locations if the polling is done later than the event generation.

I have quite a few more improvements for supporting headless / awt subsystems and abstracting out lwjgl.

Would you prefer I continue to submit them in tiny incremental PRs, or larger semi-related ones?
If I did a larger set, I could go through and comment on each code change.

I merged my latest changes with develop yesterday, but trying to keep things merged gets more painful as time passes between the two branches.

Some of the things I've done that I think are ready for merging:
  • Rename Display to DisplayDevice to match MouseDevice and KeyboardDevice.
  • Remove some of the methods from DisplayDevice
  • Make WorldRenderer into an interface and provide a registered factory to create it.
  • Have the component system actually pull isHeadless from DisplayDevice when registering systems. I'm wondering if we could change @RegisterSystem so that we have both a client/server property as well as a headless/lwjgl/awt property for identifying systems, rather than client/server/none.
  • Replace a number of Sys references with an EngineTimer
  • Fix a couple of bugs in the Headless classes
  • Put a limiter on Timer.tick() so that we sleep for 1000 ns if the delta is 0. This greatly drops the CPU load when nothing is happening.
  • Refactor Time into a BaseTime class, since everything but getRawTimeInMs() is unchanged between implementations.
  • Also a bunch of AWT support that I'm not yet ready to PR.
EDIT 3:

Is this intentional? Was state.getRelativeRegion() supposed to use createFromMinAndSize? Since createFromMinAndMax treats the max coordinates as inclusive, this turns a 128x512 region into a 129x513 one.
Code:
        public Rect2i getRelativeRegion() {
            return Rect2i.createFromMinAndMax(0, 0, drawRegion.width(), drawRegion.height());
        }
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Would you prefer I continue to submit them in tiny incremental PRs, or larger semi-related ones?
If I did a larger set, I could go through and comment on each code change.
Whatever makes sense. I am mostly concerned when a PR contains a lot of unrelated changes, or snowballs (as issues are fixed, new features get added with potential new issues).

  • Have component system actually pull isHeadless from DisplayDevice for registering system. I'm wondering if we could change @RegisterSystem so that we have both a client/server property as well as a headless/lwjgl/awt property for identifying systems rather than client/server/none.
I don't think individual systems need to be registered automatically based on differing implementations, so lwjgl/awt should not be options (they can be registered manually by the renderer). The actual options are a combination of whether there is a display (headless or not - or client/listen server/singleplayer or not), whether the machine is authoritative (server/singleplayer or not), and whether the machine is a remote client or not. The AWT implementation may count as headless for this purpose, because it doesn't render the normal way.


Is this intentional? Was state.getRelativeRegion() supposed to use createFromMinAndSize? Since createFromMinAndMax treats the max coordinates as inclusive, this turns a 128x512 region into a 129x513 one.
Code:
        public Rect2i getRelativeRegion() {
            return Rect2i.createFromMinAndMax(0, 0, drawRegion.width(), drawRegion.height());
        }
I'm pretty sure that is a mistake; I almost exclusively use MinAndSize.
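So presumably the fix is just:

Code:
        public Rect2i getRelativeRegion() {
            // createFromMinAndSize takes an origin plus a size, so a
            // 128x512 draw region stays 128x512.
            return Rect2i.createFromMinAndSize(0, 0, drawRegion.width(), drawRegion.height());
        }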
 

Mike Kienenberger

Active Member
Contributor
Architecture
GUI
Based on my awt implementation of NUI, I would like to propose moving some code from Canvas to CanvasRenderer.

I had originally created an AwtCanvas by copying CanvasImpl and worked from that point. However, now that it's finished (except for Mesh support and color for Textures), I find that it's almost identical to CanvasImpl.

By moving some code from Canvas to CanvasRenderer, I wouldn't need a custom Canvas, which probably means that no implementation would.

Code:
drawTextInternal    --> crop; renderer.drawText(); reset crop
drawTextureInternal --> crop; renderer.drawTexture(); reset crop
crop                --> renderer.crop()
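In sketch form, the split I'm proposing would be shaped like this (illustrative only, not final names):

Code:
// Illustrative only: the Canvas keeps state management and delegates all
// primitive drawing to the renderer, so AWT just supplies a CanvasRenderer.
public interface CanvasRenderer {
    void crop(Rect2i cropRegion);
    void drawTexture(TextureRegion texture, Rect2i absoluteRegion, ScaleMode mode);
    void drawText(Font font, String text, Rect2i absoluteRegion, Color color);
}

// Inside CanvasImpl, the internal draw methods then reduce to:
private void drawTextureInternal(TextureRegion texture, Rect2i region, ScaleMode mode) {
    renderer.crop(cropRegion);        // crop for every scale mode, not just one
    renderer.drawTexture(texture, region, mode);
    renderer.crop(state.cropRegion);  // reset crop
}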

Some changes as a result of this:

textureMat, cachedText and usedText get moved into the renderer. billboard was already there and is no longer needed in the canvas.

drawTextureInternal only crops for one specific ScaleMode. It seems to me that it should crop for all of them. This was also the only place where we conditionally cropped based on a comparison with the state's crop region, so I'd say we should drop the condition, as noted above.

drawTextInternal crops differently based on the font mesh. This is the only reason we need to pass the cropRegion to the renderer as a method argument. I would suggest caching the crop parameters in renderer.crop() and retrieving them for cropping fonts, but I don't want to push my luck. :) In any case, it shouldn't hurt to do the textureMat crop change before and after calling drawText, and it provides consistency. Perhaps textureMat's croppingBoundaries should only be set on a call to renderer.drawTexture() from cached crop values, like I suggested for fonts, but now we're moving into my area of OpenGL ignorance.

TextCacheKey, cachedText and usedText also move into the renderer. The cleanup for these goes at the top of postRender(), which leaves cleanup handled identically to how it is now.

I will be creating a PR for this soon.

Edit 1:

What does the color argument do for textures? I looked at uitexture_vert.glsl, and after reading the docs for gl_FrontColor, I still don't understand what it is supposed to do, or how to provide the equivalent when drawing an AWT bitmap.
 