More version increments to avoid dependency issues

Florian

Active Member
Contributor
Architecture
Currently we increment the version number with each stable release. Our version number is currently "0.50.1" (see engine/src/main/resources/engine-module.txt).

While this ensures that dependency management works well for stable releases, it often does not work during development. E.g. going back in version history to find the cause of a regression is almost impossible, because the modules have been updated to work with the new engine version. Even just developing on a branch that has not yet been rebased can cause issues.

I suggest that we increment the second part of the version number of modules and the engine when:
* A dependent module has to be updated because of an API change
* A dependency has incremented its version number

E.g. say you rename, in a pull request, an engine 0.50.1 class that is used by module N 0.14.0 and module M 0.3.0.

In that same pull request you would then also have to increment the engine version number to 0.51.0, the version number of module N to 0.15.0, and the version number of module M to 0.4.0. In addition you would have to update the engine dependency of all modules to < 0.52.0 by setting the exclusive maxVersion field to 0.52.0.

Ideally this version bump should either be automatic or doable via a script. After all, we have to check that all modules still work after an API change anyway; incrementing the version number should be no issue if we have all modules checked out and building.
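Such a script could look roughly like this - a sketch only, assuming a module.txt with a JSON-style "version" field (the real format and field names may differ):

```python
import re

def bump_breaking(version: str) -> str:
    """Increment the second ("API broke") component and reset the third,
    e.g. "0.50.1" -> "0.51.0"."""
    major, minor, _patch = (int(p) for p in version.split("."))
    return f"{major}.{minor + 1}.0"

def bump_module_txt(text: str) -> str:
    """Rewrite the version line of a (hypothetical) module.txt-style file."""
    return re.sub(
        r'("version"\s*:\s*")([^"]+)(")',
        lambda m: m.group(1) + bump_breaking(m.group(2)) + m.group(3),
        text,
    )
```

Run over the engine plus all checked-out modules, something like this would handle the mechanical part of the bump in one go.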

When we want to do a stable release we just use the version number as it is or increment the third digit if we already used that version number in a stable release.

E.g. if our last stable release was 50 and our version is already at 0.53.0, then we release the stable version 53.

If there was no need to increment the second version number since the last release, we can simply increment the third number. E.g. if the version number was 0.53.0 and our last stable release was 53, then we just release a stable release 53.1 with the engine version number set to 0.53.1. Or we simply add a "0." in front of the stable release names to make them match the engine number.

If a module needs a feature or bug fix that got introduced in 0.53.1, it sets its minVersion field to 0.53.1.

If we release a 0.53.2 with new features or bug fixes that don't break the API, then the module that has the version requirement >= 0.53.1 and < 0.54.0 will continue to work.
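How such a version band behaves can be sketched with a plain tuple comparison (an illustration only, nothing Terasology-specific assumed):

```python
def parse(version: str) -> tuple:
    """Split "0.53.1" into the comparable tuple (0, 53, 1)."""
    return tuple(int(p) for p in version.split("."))

def satisfies(engine: str, min_version: str, max_exclusive: str) -> bool:
    """True if min_version <= engine < max_exclusive."""
    return parse(min_version) <= parse(engine) < parse(max_exclusive)

# A module requiring >= 0.53.1 and < 0.54.0:
# - 0.53.2 (compatible patch release) stays inside the band
# - 0.54.0 (potentially breaking) falls outside it
```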
 
Last edited:

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Hmm, I'm wondering if part of the problem is developing modules against the development version of the engine at all. There can be no guarantee on the stability of the API in a development state, only on particular releases.

I agree it is a problem that modules continue to be considered compatible despite breaking changes at the moment. I think on the gestalt-module end of things I will change the default maxVersion (exclusive) for a given version below 1.0.0 to be the next minor version, which aligns with what you are stating and with semantic versioning. I would suggest on Terasology's end the implicit dependency on engine be removed from module handling, and all modules would have to explicitly state the version they depend on, or depend on something that depends on the engine.
 

Florian

Active Member
Contributor
Architecture
Hmm, and how would you want to get away from "developing modules against the development version"? When we change a used API, the modules won't work with the engine until they get updated. We could delay updating the modules until the next engine release, but typically it is easiest to change engine and modules at the same time, especially since it is often possible to adjust API usage and API definitions with IDE refactorings at the same time.

About maxVersion: I would assume that typically the engine dependency of a module will have a mostly constant minVersion and a constantly increasing maxVersion. So I think a functionality that is, simply put, "maxVersion = minVersion + 1" isn't needed. Instead I would suggest that we make the maxVersion field mandatory (ideally after adding it to all modules).
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Hmm, and how would you want to get away from "developing modules against the development version"? When we change a used API, the modules won't work with the engine until they get updated. We could delay updating the modules until the next engine release, but typically it is easiest to change engine and modules at the same time, especially since it is often possible to adjust API usage and API definitions with IDE refactorings at the same time.
To be clear, modules are not part of the Terasology engine project and it isn't required for the engine developers to have every module project checked out as they make changes. I never do. And I don't expect module developers to be updating engine with new features either - module development requiring constant changes to engine is indicative of a failure of our modding api.

Ideally I think we would separate the versioning of engine from the versioning of Terasology releases. We would make changes to engine and release that as a version. Modules that are being maintained would then be updated to depend on the new version of engine. Finally a Terasology release would be generated including a given version of engine and a set of modules compatible with it.

If someone is developing against the snapshot version of engine, then I consider them to be making a choice to deal with the instability of the api. The same as using a SNAPSHOT version of a library.

About maxVersion: I would assume that typically the engine dependency of a module will have a mostly constant minVersion and a constantly increasing maxVersion. So I think a functionality that is, simply put, "maxVersion = minVersion + 1" isn't needed. Instead I would suggest that we make the maxVersion field mandatory (ideally after adding it to all modules).
I'm not inclined to make it mandatory in gestalt-module - in theory, if Terasology or any other project using gestalt-module had a stable API, it would be very satisfactory to use the default maxVersion until the API broke, and then specify a new maxVersion as necessary. If you want to specify it explicitly for all modules, go ahead; it being optional doesn't prevent that.
 

Florian

Active Member
Contributor
Architecture
Hmm, yeah, maybe we should separate engine and Terasology releases (with modules).

Then we can do an engine release with an increment of the second version number to indicate a broken API, and an increment of the third number whenever we add a feature to the engine that is needed by a module.
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Lovely topic, one close to my heart! :)

I've written about this on and off, but probably with equal parts buried good ideas mixed with lengthy rants. Here's a fresh round of scripture.

My desire for some time now has indeed been to get off snapshots and the closely tied engine versions <=> "stable releases". The dynamic builders are a large piece of the puzzle in moving forward. Honestly, currently the only reason stable release 50 == engine version 0.50.0 is that we bump the version manually, and I happen to do so when doing a big round of testing and consider most of our stuff stable - it isn't a formal thing, I just figured I'd make the numbers match till we make them more formal :D

What I think we want to happen is instead bumping the patch level (x.y.z) much more frequently after merging a few PRs that don't break backwards compatibility. Since we're still below release 1.0.0 the rigid rules of SemVer technically don't apply yet but I think we can indeed use it with minor (breaking) and patch (compatible) until we go 1.0.0. Same goes for modules.

The key for me is to do the versioning separately from any PRs / direct commits, as it would be a pain trying to manually bump the version in that fashion (imagine multiple outstanding PRs or an oops commit that breaks something). Instead just merge to the develop branch, and if an author is happy to do a component release (not a game release) they do so by running a job in Jenkins with a single scope parameter - major, minor, patch. Or ask @Gooey to do it for them on IRC or Slack. More on that later!

Right now I've got the scope parameter set up in Jenkins on the stable engine job and a test module job. It sadly just doesn't do anything yet, other than publish to a non-snapshot repo in Artifactory. I manually commit the engine version bump when I do a stable game release. The release job in Jenkins naturally is meant to do the version bump for you, and IMHO also should do the Git push from develop -> master for you too. We should never need to manually push to master and everything should depend on release builds that have come out of master.

So say we get that in place for engine and all modules (and libs too where it makes sense). Handling module compatibility is a fun topic. I would also like to follow SemVer to the letter so we don't need to worry about max version. Here's a scenario touching on multiple points:
  • Module x is currently at 1.5.3 and its author decides to rename a class that's part of its public API with multiple users. It needs to be bumped to 2.0.0.
  • Author makes a PR with just the change (no version tweak) or commits directly to the module's develop branch. Jenkins builds a snapshot, posts stats
  • Author confirms the change as expected, runs the release job in Jenkins with scope "Major" - Jenkins pushes develop to master, bumps version to 2.0.0, builds, bumps version to 2.0.1, pushes to develop for next snapshot
    • This also gives a chance to catch changes of incorrect scope. Had the author intended a minor release but noticed during testing that the change breaks backwards compatibility, they simply run the release build with the scope set higher (or outright revert it, which is fine while it is just a snapshot)
  • Nothing breaks anywhere - the modules using 1.5.3 know they are not allowed to use 2.0.0 (no max version set, just the rule that next major may break compatibility)
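The release flow above boils down to two mechanical version operations - bump by scope for the release, then bump the patch for the next snapshot. A rough illustration (the scope names are the ones used above; everything else is hypothetical):

```python
def next_release(version: str, scope: str) -> str:
    """Compute the release version from the current (snapshot) version."""
    major, minor, patch = (int(p) for p in version.split("."))
    if scope == "major":
        return f"{major + 1}.0.0"
    if scope == "minor":
        return f"{major}.{minor + 1}.0"
    if scope == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown scope: {scope}")

def post_release(release: str) -> str:
    """After the release build, bump the patch for the next snapshot."""
    major, minor, patch = (int(p) for p in release.split("."))
    return f"{major}.{minor}.{patch + 1}"

# Module x at 1.5.3, breaking change, scope "Major":
# next_release("1.5.3", "major") -> "2.0.0"; post_release("2.0.0") -> "2.0.1"
```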
Now at this point we have two challenges:
  1. A lot of changes may require a major version bump yet some dependent modules may work perfectly fine. How to find that out easily?
  2. How do we determine for a game release what module versions to include?
This is where I'm seeing potential for the automatic pull request testing which is now live (yay @msteiger !) + expanded options in the module index/manager. I want to build every dependent project we know of when something upstream changes - at all. This is why we needed the dynamic builder droplets as that's a lot of building :D

The exact details escape me a little, along with the setup in Jenkins, but it goes something like this:
  • Release jobs (master branch) only ever build on manual request - that may sound like work but I want to use @Gooey to make releasing easy. Just tell him on IRC or Slack "Promote module x as major release" or so and Jenkins will take care of the rest.
  • Snapshot builds (develop branch) run on commit just like now (this is all we have had so far)
  • Pull request builds run on creation of a PR (if whitelisted / approved by admin) - report the usual code metrics (we have this now!)
    • On completion of a PR build, any immediate downstream module dependent on the module (or engine) built will itself run a throwaway build to do a compile and code metric test against the update. This will help catch situations where we break things without having to rely on a mega-workspace (I'm at 67 modules and rising - will become unrealistic soon, takes longer to build the Gradle project tree than actually executing tasks)
    • Additionally or alternatively to the above run as a consequence of a release build finishing (but at that point the genie is already out of the bottle)
    • Should another series of throwaway builds run in response to snapshot builds? If every change goes in via PR we wouldn't need this (would just repeat the PR-triggered builds)
Why do the throwaway builds? First to know when we break stuff, which is valuable information, especially before we release a breaking change. Even if we are fully aware that we're committing a change that's incompatible with some stuff, and are guarding against it with a major release bump, knowing exactly what breaks allows us to prepare updates faster to get everything working again.

Secondly, and this is where it gets geekily interesting to me, the throwaway build tells us whether a downstream project may work unchanged with the next major version of something. But we can't just go on that and automatically include the newer build in a game release (current problem). And testing every module individually to see if it does indeed work and then do a re-release just to flag it compatible with a newer engine (or upstream module) would both suck and litter our repos. Instead:
  • On successful compilation against a newer upstream major release mark the module as "Compiles OK!"
  • On successful unit tests + other code metrics mark as "Compiles with good code health!"
  • On successful integration tests / other deeper automated tests (TWOVANA) mark as "Passed automated game tests!" (automated acceptance test level I guess)
  • On successful manual acceptance tests (somebody actually played it and reported it as fully functional) mark as "Play tests OK!"
  • Finally if somebody actually changes the module then consider formally assigning upwards compatibility in the version file. But no Git commit needed until here.
The first four stages would be recorded outside of the repo and I'm not sure exactly how. Jenkins could annotate build jobs somehow, update a central database, edit module threads in the forum, or even batch-update the Module Index (that could get Git-spammy so batch the changes). The data could then be presented on a module tracker site, shown in the launcher when browsing modules, or viewed otherwise when we're considering doing a full game release including all Omega modules.

Naturally the above is a whole lot of work and not at all something we'll have any time soon. But we could start small and move toward that future. First off we should just get to where we release things and follow SemVer. Perhaps we should even make hitting Alpha (architecturally stable but not necessarily gameplay ready) == going 1.0.0? So we can go all SemVer.

On the technical side this involves some needed improvements:
  • Script for the release jobs to do some Git pushing and updating of version files. Next on my list!
  • Probably a single central release orchestrator job is needed to do the Git push before the release job actually starts, that way it can correctly report the actual changes present in that build via Git history. Only admins could run this.
  • Authors that are not Jenkins admins can use Gooey commands instead (which in turn runs the release orchestrator). There is a Hubot role system we can use to assign modules to users so they can run releases for only a subset of modules
  • Groovy script to update module jobs to have job dependencies matching their module.txt dependencies (I've got a proof of concept for this working) so the correct downstream jobs can be triggered when needed
  • Central point for storing results for the fancy throwaway builds we can display/use later in the process (fairly easy to use the Index and that was my intent - although just for releases).
  • Determine how we want to define compatibility bands for modules when both v1.0.0 and v2.0.0 will work fine for a module just declaring a dependency on min version 0.1.+. Set max version to 2.+.+ ?
  • How to handle changes affecting multiple modules at once?
  • Launcher still isn't updated for handling the new Distros, let alone beginning to show module release info (@Skaldarnar + @shartte ping!)
If we can get the module manager/browser working (intended to work either inside the game or in the launcher) the need for an actual game release goes down substantially. You just grab the base game client and pick your "mod pack" akin to FTB/Technic (gameplay template for us, really) and get the appropriate modules downloaded automatically.

Mainly we have the stable game release zips because we don't have the more granular way to auto-download the appropriate modules. And for any offline play situations, of course.

If anybody managed to read all the way through this mega-post attempting to show purpose in my madness give yourself a cookie. You deserve it! :coffee:
 

Florian

Active Member
Contributor
Architecture
@Cervator: I started reading and could not believe it when I could scroll again and again ^^.

About module dependencies: You can define a minVersion and a maxVersion. If a module works for 1.0.0 and 2.0.0, you would set minVersion to 1.0.0 and the exclusive maxVersion to 3.0.0. Or, while we are at 0.x.x, we would set the maxVersion to 0.3.0 when our module works up to 0.2.x.
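The implied rule - below 1.0.0 a minor bump may break the API, from 1.0.0 on a major bump does - can be sketched like this (an illustration of the idea, not how gestalt-module actually computes it):

```python
def default_max_exclusive(highest_working: str) -> str:
    """Exclusive upper bound: the next potentially breaking version
    after the highest version the module is known to work with."""
    major, minor, _patch = (int(p) for p in highest_working.split("."))
    if major == 0:
        return f"0.{minor + 1}.0"   # works up to 0.2.x -> maxVersion 0.3.0
    return f"{major + 1}.0.0"       # works up to 2.x.x -> maxVersion 3.0.0
```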

About not needing Terasology releases: When Terasology can download modules itself from the Artifactory, I think we would still not just provide a list of modules that are supposed to work together, but would probably have a "Stable Gameplay X" module that has very specific dependencies to guarantee that it works: e.g. minVersion = 0.5.3 and maxVersion = 0.5.4.

There could of course also be a "Development Preview Gameplay" module that has no version limit for its dependencies, so that players can try out the newest stuff if they dare.

@Cervator, about the tools: It's cool that you are working on such a release mechanism, I am looking forward to it :D.

Do you also have something planned for incrementing the maxVersion field? E.g. when the engine got its "API breaking" number incremented from 0.50.1 to 0.51.0, but most of the modules do still work and can set their exclusive maxVersion from 0.52.0 to 0.53.0.
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Thank you! *takes a bow*

The max version bit is tricky since we don't want to pollute GitHub with excessive version-tweak-only commits. If we truly follow SemVer (and pretend nobody ever violates its contract by putting breaking changes in a patch release), we should never actually need max version set anywhere. It is pretty much an extra; we could increment it anyway to cover additional compatible major releases of dependencies that don't break some things (module X works with its dependency on Y from min version 1.+.+ to max version 3.+.+).

On getting super stable game releases then yeah, we might want to pin the versions somehow. Currently the Distros are simply put together as a list of modules with no version info at all, so they'll always create a game distribution with the latest of all included modules. That's another quick "Well, it works most of the time!" hack just to get module distribution out of the engine build job.

With release management enabled that should get better just from ignoring snapshots, but would still happily pull in new major releases of stuff that would then break for dependent modules (or more correctly, you'd start getting red modules in the list indicating incompatibility - instead of crashing, yay!).

The next step might be to change the distro list to be more module-like, where you'd just request "ThroughoutTheAges:1.0.0,JoshariasSurvival:1.0.0" instead of every individual module listed, using the dependency resolution to get the right modules plus having the distro packaging fail if there are conflicts (new major releases available causing trouble, or different gameplay templates depending on different versions of a shared module). Then we can be more sure we are releasing a fully stable game package.

In the end Omega might become a short list of top-level modules pinned at their most recent major release compatible with the current engine version. By the next game release we'd consider if we need to change the pinned versions to bundle.

As for the numbers to actually use in a game release I don't know yet. It probably shouldn't be the engine version number. Maybe it could just be the job execution number from Jenkins, with some new neato script listing which version of engine + all modules + lists of changes for all that. There are bound to be some examples out there we can research for good practices.
 

Florian

Active Member
Contributor
Architecture
Even if we follow SemVer, once the "API broke" version number gets incremented, all modules that used it should be seen as incompatible.

E.g. if a module depends on engine 3.4.0, and we release engine 4.0.0, then all modules that depended on 3.x.x (maxVersion = 4.0.0) can't be used with engine 4.0.0. And it is correct that they are seen as incompatible, as they may really be incompatible with the engine. So after the module has been tested for compatibility, we need to make a patch-level release of that module with an incremented maxVersion field.

If however an "API stayed compatible" version number got incremented then of course the maxVersion field of the depending modules does not need to be updated.
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Yeah I think we're pretty much in agreement already :)

We could do patch-level releases of dependent modules after testing, just to bump maxVersion. I just think that could become spammy with hundreds or even thousands of modules around later. So the module Index stuff I talked about above tries to mitigate that by adding smarts to help infer whether or not a module remains OK even without a version tweak. That's especially the case for modules we might not directly maintain ourselves.

But that's a way future item in any case :)
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
So it didn't take long to hit this topic again :)

https://github.com/MovingBlocks/Terasology/pull/1670

Even with proper use of SemVer (or extended proper use with pre-releases included) we'll end up blocking dependencies that are too new very easily.

I think we should make the game more lenient when dependencies are present but "too new" - include a disclaimer instead of a blocking error. Then over time like mentioned above (starting with "On successful compilation against ...") we can make more information available to let the player make a call whether to try something or not.

Actual missing dependencies or too low dependencies are different, of course. As is joining a server (in theory there you should just get whatever the server uses - unsure if the player would need any sort of disclaimer there)
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Thanks for the initiative and comments :)

I'm missing something subtle on that wiki page though, feels like I need a diff tool to compare between the two blocks. Might be from it being late here. Is there any difference other than option 1 suggesting increasing max version and option 2 just working with a more lenient engine / version scheme? I must be missing something.

To quickly address the snapshot and current process when incrementing: I left out "SNAPSHOT" in the module.txt just because it would pretty much always be the case outside of built artifacts - instead I just told Gradle to assume everything is a snapshot unless it is building the master branch in Jenkins. So that way it boils down to:
  • New minor release needed, example version in module.txt 0.54.1 (assumed snapshot)
  • Bump module.txt to 0.55.0
  • Push to master - let it build (will be a non-snapshot)
  • Bump module.txt to 0.55.1
  • Push to develop (snapshot build again)
What we want it to be is simply a parameter on a release job that'll do the version bumps and pushing for you.

On keeping modules easier to manage, yeah, that's still tricky. I empathize with your hunt for a better way, like with Git submodules, but don't think they'll work in the engine repo. However - it got me thinking, and with some related thoughts from the RX14 fellow on IRC I wonder if we could use a separate repo purely for Git sub modules linking to specific Terasology modules.

Say we make a new repo called ModuleLineup. In this repo we configure Git sub modules for every module in the lineup, at their current head commit, after an Omega release. Tag that state with the same "stable55" as the engine repo. By the time we release "stable56" we update the sub module commit pointers to the versions we're releasing with the next Omega.

Now you can suddenly check out every module at the exact source level as of a particular Omega release, probably along with the paired engine release. Ask for engine release 55 + ML 55 == pure source workspace at that point in time with two actions.

I don't know if it would work or hit performance issues with that many sub modules. Also am not sure how we might integrate that into a standard workspace. Somehow check out the ML repo into /modules ? Would you then only be able to have normal git cloned modules or the big monster sub module repo?

I dunno, this might just be a bad idea. Need to sleep on it :) Then there's also Git subtree merging, but ...

@msteiger: ping for great justice! Since we're talking about creative ways to use Git sub modules. Like I mentioned on IRC earlier the option to embed a GitHub wiki repo as a sub module /doc in a main repo is also somewhat intriguing, but likely a bad idea in a "main" repo. Monster ML repo with nested doc wiki sub modules? Woo. Crazy like a fox. Yeah I should go to bed.


.. one more thing! I wonder about using milestones for the engine repo. It works so nicely for the launcher. I tried making a "!NEXT: v0.54.0" milestone ("!NEXT" so it'll sort at the top in some places, and v0.54.0 is the next minor release) and assigned the big Gestalt PR to it. Sort of makes sense. Then added the world preview PR to it as well, because that's going on the next release for sure (already merged to develop). That doesn't work so well - because likely the Gestalt PR won't be merged until we do another release (as a checkpoint) first. So the preview PR would end up in v0.53.3 not v0.54.0

!CURRENT for stuff already merged, !NEXT for next minor? But then if something current is merged that needs a minor bump they'd overlap, argh. Yeah, bedtime, don't think too heavily about my ramblings, will refine tomorrow.
 
Last edited:

msteiger

Active Member
Contributor
World
Architecture
Logistics
First, I apologize for not having read the entire conversation yet.

I don't like writing version numbers (in particular not snapshot version numbers) in properties files, because it requires adding extra commits for the release, and I need to decide in advance whether the next version will be major or minor (e.g. is it 0.11.3-SNAPSHOT or 0.12.0-SNAPSHOT?).

So here's what I do for WorldViewer:
  • Use git tags to annotate release versions in the form major.minor.0
  • Use git describe to find the number of commits since the last tag
The result is major.minor.number-of-commits.
That info is stored along with several other details (current commit SHA, date, etc.) in a Java source file, so it doesn't need to be parsed.
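Assuming `git describe` output of the usual shape (tag, commit count, abbreviated hash, e.g. "1.5.0-14-gabc1234"), deriving the version could look like this - a sketch of the idea, not WorldViewer's actual build script:

```python
import re

def version_from_describe(describe: str) -> str:
    """Turn "1.5.0-14-gabc1234" into "1.5.14": release tags are
    major.minor.0, and the commit count since the tag becomes the
    third component."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.0-(\d+)-g[0-9a-f]+", describe)
    if m is None:
        return describe  # exactly on a tag: describe prints just the tag
    major, minor, commits = m.groups()
    return f"{major}.{minor}.{commits}"
```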

What does that buy me?
What it does not do:
  • Deal with SNAPSHOT versions at all
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
I remember the Git commit count based approach and like it quite a bit in theory. Snapshots could be handled the same way - assume snapshot unless building a master branch in Jenkins. You could write the version info at build time and ship with the binaries.

However, the main spot where I see a problem is when you're trying to use a source module - no versioning :(

Git isn't reliable in the source scenario, especially since somebody could've just grabbed a source zip from GitHub rather than having a local Git repo set up. It seems like we still need a file of some sort. But we certainly could fully automate it one way or another. I don't remember if we came up with any other cons last time, but I'm sure the notes are around here somewhere.
 

prestidigitator

New Member
Contributor
Would it be possible to include another component in the version ID? 0.x.y.z would allow the full use of semantic versioning without moving to "version 1". (The three-component format could be interpreted—for now—by translating 0.x.y to 0.0.x.y)
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
For the time being we're just doing 0.x.y where x has the behavior of a major release and y the behavior of a minor release. Before 1.0.0 everything is in flux and changing all the time anyway so there isn't really a need to consider patch-only changes yet. Everything that doesn't break major things just increments the y :)
 

Florian

Active Member
Contributor
Architecture
@msteiger If it were just about the versioning of the final product, then I would totally agree with your suggestion. However, it is about the versioning of stuff that itself gets referenced as a dependency.

What I like about your suggestion is that it would ensure that we get git tags. However, there are also other means by which we can achieve that.

For example, we could have a Jenkins task "build minor" that creates a tag from the specified version (without SNAPSHOT), automatically increments the patch-level version number afterwards, and adds a -SNAPSHOT suffix.
If someone looks at the version number "0.53.2-SNAPSHOT", they will know that nothing big has happened since 0.53 except for a patch-level release 0.53.1. You get all the advantages you listed with that variant too (except for the automatic version number for each commit, which doesn't work out anyway - see point 4 below; and I see no use case for it). If the creation of the release commits is automated, then making them is not an issue.

The disadvantage of that approach with snapshots is that we would even have 2 commits per release...

Against deriving the version number only from git tags speaks:
1. Some modules don't need to compile but can just be packed. Without a build they won't get any version information.
2. Source can be obtained via other means. I think a version number is an important bit of information that should be contained in any source bundle, even if you just download a zip archive snapshot from GitHub.
3. We would make the ability to build the source dependent on having the source under version control with git. While I like git and think we will use it for a very long time, I think the same was thought when SVN was popular.
4. Having the last version number be the number of commits since the last tag only works if you are looking at a single branch head. You can't determine a version number for an arbitrary commit from the git commit history that way, as it is not linear.

I am, however, also happy with just git tags as versioning if the others would prefer that.

If we use git tags as the only version numbers, I would suggest that we name the tags 1:1 like the version number, and have snapshot builds be called x.y.z-<first 8 hex digits of the git hash, which are usually unique>
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
In my experience with Maven projects (which always have a version in their pom, and thus the source), the ideal process is:
  • The version in the source repository will always be a snapshot version (except release tags/branches). The source code always represents a potential future release.
  • The release process should automatically set the version to a designated version to build the release artifacts (and create any release tags/branches), and then set it to the designated post-release snapshot version and commit back to the repository.
  • The post-release snapshot version is probably the next patch version unless you are sure at that point the next release will be a minor or major increment. You can optionally update the snapshot version to the next minor or major at the point something will cause that level of change, or otherwise leave the version number alone until release (I've told my team at work to let the release process handle all version changes, less confusing that way).
I'm not very enthusiastic about the idea of using git tags to mark the version - it actually seems like more work, since I'm not aware of a release plugin for Jenkins that will apply the next snapshot tag back to the repository. Maybe something exists for that? It just feels somewhat intangible to me - and it would also mean that the source cannot be properly built outside of a git environment (the gradle build will need to obtain the tag). I have been using git tags to mark the points at which versions were built (via the Jenkins release plugin again) - this works well.

I would also say that a snapshot artifact's version should be just x.y.z-SNAPSHOT. Artifactory takes care of applying timestamps when these artifacts are uploaded. Feel free to embed extra data in the manifest or similar, though.
 