Back to gradually making progress on the editor for Yin, again. I mean, it looks like a thing now I guess?

Because of the way it all needs to work, I've been overhauling the way viewports/windows are handled throughout the code, which, uh, has had a bit of a domino effect on a lot of systems.

There is a bit of a chicken-and-egg problem that had been avoided for a long time and needs addressing too. For the SDL2 shell, the window was originally created by the engine via an "OS_CreateWindow" function during initialization; the engine would determine what graphics APIs were available, pick the most appropriate one, then hint to the shell the type of window it wanted to create. The editor worked completely the opposite way, for obvious reasons (the "OS_CreateWindow" function would just do nothing in that case, as the OpenGL context and window were created for us - ew, I know!)

It certainly makes more sense for the shell to determine this rather than the engine, and the plan now is for the SDL2 shell to create the window itself just as the editor did. But this has raised some other concerns, like, do I now move all the graphics API detection out into the shell? Because it also needs to look up the user preference too, and all of the configuration state is managed and tracked by the engine - and the engine can't currently be initialised until the window is created.

Damn my brain is useless...

The solution might be to move the configuration state manager out into the "common" library.


So the plan to merge Compton and Yin into one engine didn't quite pan out. It's still a long-term goal for me, but Compton has way too much put into it at this stage to just merge and ditch; it's likely months of work to combine the two feature-wise.

The Compton engine itself is still very much in mind for a very particular game I've wanted to make for a long long time too, hence the hesitation, so this is pretty much a matter for me to buckle down and finish that damn game at this stage. Though it's not like Compton doesn't have its own challenges. Sigh.

Unsurprisingly my plan to prioritise also didn't pan out... I've wasted a good amount of time farting around as usual with a number of different projects. I really need to experiment and learn better ways to focus myself when it comes to these things. I'm at least very conscious that Compton and Yin are my two primary focuses, but now and then something that's really too ambitious for my own good can distract me for a good few weeks - I'll get frustrated, and then rinse and repeat.

I'd also made Yin public for a brief while on GitHub, but it made me feel a little anxious about touching it knowing there could be eyes on everything I'd be doing. For now I'll likely continue with the routine of making whatever with the engine and then releasing that copy of the source code once I've progressed further along. That said, most future publications will likely be under LGPLv3 rather than public domain as I'd done in the past.

Actually I'd forgotten to mention before but Compton isn't public on GitHub either anymore, though in that case it's more so that I don't want anyone playing the game I'm working on there before it's ready (whereas in the case of Yin, there's currently no game.)

Worth outlining that I'm not arrogant enough to think that Yin as an engine is useful to anyone but me, but I'd hope that some things in there could at least be somewhat informative or educational, either as a "do" or "don't do this" sort of thing. Probably more of the latter.

With that out of the way, however, here is a list of things that I've done for Yin since the last post (pretty much just had a quick nose through the significant bits in the commit history, given I don't tend to keep the changelog especially up-to-date.)

  • Began migrating what exists of the editor into the engine
  • Made more progress on the AST for the Dickens scripting language, though there's still a long way to go
  • Implemented a preview state for materials, so you can see how it may look without having to necessarily load the whole material into memory; used for the material browser
  • Implemented a first pass of the material browser
  • Created new icons for the editor modes
  • Pulled over the binary serialisation system from Compton
  • Made a minor syntax change to the Node format; 'integer' type is now just 'int', for consistency
  • Experimented with creating a face-inspector for the World Editor
  • Updated the memory manager so that cached objects now store a hashed name as the ID, along with a descriptor for debugging, and use the hashed name for lookups
  • Rewrote the lighting shader
  • Overhauled the profiling overlay
  • Updated the Blender plugin to support the latest format changes
  • Introduced a file aliasing system, essentially allowing you to override a file I/O request with a different file
  • Default mount locations are now provided by a Node file rather than being hard-coded as before
  • Added support for bloom
  • Mouse look (finally)
  • Gave the client-side input code a bit more TLC
  • Fleshed out client state management a bit more, with queries for the connection state with the server-side, etc.
  • Reintroduced fog which can now be controlled via the world properties
  • Started hooking up player movement again
  • Face normals are now generated
  • Started implementing visible face determination and portal traversal when drawing the scene
  • Started experimenting with mirrors
  • Started implementing Lisp interpreter as an interim solution until Dickens can replace it (or they'll more likely live side-by-side)

Most of the significant work lately has been towards the World stuff, which makes sense I guess since it's sort of the meat and bones. The engine can now pretty much go from sector to sector and collect a list of visible faces, split that between portals and general faces, and then just keep rendering the same sector over and over from each portal with all the actors correctly transformed.

Downside is that there's still lots and lots of work to do, especially on the rendering side.

Lighting is an obstacle that can technically now be overcome (right now light sources are hard-coded and then passed through the material system as if they'd been picked up for that particular surface), but it's still a question for me as to how lights are generally best implemented. Long, long ago Yin was originally envisioned as a scene-graph-oriented renderer, and the API for this still actually exists in the engine, but as I experimented and prototyped I somehow just drifted away from it. To get to where I need to be, it really would make sense to go back to that design. Point is, I'd like to do this before fleshing out the lighting system more.

This video shows some experiments with updating the lighting, as it didn't really behave correctly before. There's support for specular and normal maps, however they're not demonstrated in the video below (fairly sure I demonstrated them in the past at least.)

The screenshot below shows the initial pass of the material browser which is scalable too.


The eventual plan is to have it so that clicking onto (or hovering over) the material might give a (probably optional) live preview, so you can see how it immediately looks under lit conditions or with animations playing.

This final screenshot shows a quick little test map I whipped up with the Blender plugin; you can pretty much mesh out whatever geometry and just export it for the engine as a "World Mesh", which can then be used as the body of a sector. The gateway currently features a portal surface which is just being used as a mirror, though there are some problems I need to solve with this at some stage (not to mention mirrors are a bit of a stretch goal).



Recently decided to try to discipline myself a bit more. Too many projects on the go at once, which honestly has been doing nothing good for my mental health. I've decided to abandon some other projects that had been adopted on the side and merge Compton, my 2D engine, into Yin.

This means that going forward Yin will have functionality for supporting both 2D and 3D games. This also means that Compton is being retired proper now. But going forward I'll hopefully only have Hosae and Yin in focus (and whatever game I build on top of Yin.)

Over the last day I've been gradually working on incorporating functionality from Compton into Yin, particularly starting off with the GUI system.

While migrating the GUI system, and rewriting it in the process, I've opted to actually incorporate it as a separate library instead of integrating it directly into the engine, which is an approach I'd like to use for other major additions going forward.

Currently trying out wxWidgets for the editor frontend, though my motivation has been all over the place for that; I don't like wxWidgets but at least it's a little more actively maintained and supported than the Fox Toolkit I was using before (though I'd say Fox is still significantly nicer in a lot of respects.)

Yin's scripting language is now called Dickens.


Did a tiny tiny bit of work on the 3D engine yesterday after a long pause, which basically involved admitting defeat and starting an overhaul of my Blender plugin. I guess I'll be making levels via Blender. For now.

Plan is to make another game this year (or next) to progress the engine tech a bit more. Basic design is in the works.

The 2D engine now has a functioning animation system; a script for pulling groups of sprites from an image, and another script for outlining the individual animations - their speed, which frames, whether to loop or flip, and more. Good stuff!

Dan suggested an editor. The thought makes me cringe; I've been slightly fed up with working with various UI frameworks. But then it hit me, while I was in the middle of something else today, that my 2D engine has a pretty simplistic but functional UI system that I've been looking for an excuse to flesh out a bit more - and that would mean the editor could be integrated within the engine. That means less work... Hopefully!

Didn't make much progress towards that today however. Considering how to do it. Probably just a pre-processor flag that denotes whether to compile as the editor? Handling this all at runtime always ends up a bit of a mess, at least with how I've handled it before. Do you do it via a keyboard shortcut? Should I integrate a console to allow the user to trigger a command (and flesh that out going forward)? Or should it just be a stand-alone application?

Thinking more and more about silly complex things, and really need to avoid that, just need to prioritise actually making progress!

Oh, and came up with a possible name for the game the other day. Dulce Somnii. It roughly translates to 'sweet dreams' in Latin. I've had it as my status on Discord for a while, but it might make sense for the game in mind, certainly more interesting than the other name that was floating around my brain.


So my Anachronox re-implementation now compiles on both x86 and x64 architectures - and on Linux, finally. The SDL2 implementation is done, though it introduced a bug with movement speed... The rest of the simulation seems fine, just movement is slow? Stupid shitty lockstep crap. Will look into it later given we'll be rewriting the movement logic anyway.

Apparently someone got it running on an ARM device as well? I do have a Raspberry Pi here (actually 3/4 of them), but had not really considered doing this yet as I'd like to rewrite parts of the renderer first.

Things on Yin have been busy busy busy. There's been more work on the client/server architecture, entity components, input, audio and the material system. I've been bouncing around a lot. 

Entity components are mostly done, but I've not yet had a reason to migrate over from the Actor system, and there are some questions about how things will work from a user perspective (currently investigating entity templates outlined via node files). I've not yet considered how networking is going to work in relation to the components though...

The new input stuff is mostly done but, again, I've not had a reason to totally migrate to it just yet. You can outline a collection of actions with associated keys, and the rest of the logic will automatically look out for any key activating said action before using a callback to do whatever is required.

For audio, there's now the AudioDriverInterface, which functions as an abstract interface over whatever audio system is available on the given platform - in the case of Windows I've been working on supporting XAudio2, and on Unix platforms I'll probably use OpenAL Soft instead. This isn't high up my list, but I felt it was a good idea to get the groundwork in place for doing something more substantial than what we had before, i.e. using SDL2 Mixer.

This was something that was originally envisioned for Hei, though I'm on the fence about it for now; if we have an abstraction layer for graphics in there, it would probably make sense to consider one for audio too, particularly as there's been a large explosion of audio APIs lately... Perhaps I'll migrate it down the line.

Nothing that substantial changed for the material system, however post-processing is now finally handled through the material system, which was very much influenced by a bug I was trying to tackle. This did require changing how we handle the render-target built-in variable, as previously this just threw over the render target used for the menu - a bodge job introduced for the game jam. The way in which we hand over the viewport size has been changed a little bit as well for the material system, but there's nothing too interesting to say about it.

Generally though, it's more flexible now. It also gave me an opportunity to experiment a little more with other effects, and I'll likely be looking at adding effects such as bloom, depth of field and more relatively soon.

Otherwise, I've been working on a tool to convert maps produced by Hammer/Jackhammer/Trench into geometry that Yin can use and been making pretty good progress. Been lacking motivation but I really hope to have it finished very soon, the only thing left to do is serialization and then we should be back to having proper environments again!

Added support for QOI to Hei before I went to bed today too. It apparently features compression close to PNG, but with a faster decoder, so that's pretty neat. It actually reminded me of arguments I'd had on another project about the speed of decoding a PNG - the dumbest argument I've ever had, given it was a retro-style game with pretty small textures, so the impact was absolutely negligible, and if anything contributed to the load time it was the crappy material system we had at the time (which I take full responsibility for.) Anyway, that's all a story for another time.