The goat sees GIMP 2.7.5 release, invades

The GIMP team announced the release of v2.7.5 yesterday, while the masses were actually expecting 2.8, or at least a release candidate. The pain of undelivery, however, is likely to be subdued by some interesting development going on in parallel.

But first things first.

What's in 2.7.5?

Despite the previously announced feature freeze, a few changes have managed to sneak in, barricade the door, and take hostages. Direct opening of PS, EPS and PDF files is now done via Ghostscript's library rather than the executable, which means no more temporary files. Also, the default quick mask color is now configurable (Default Image section of the Preferences dialog).

A much more noticeable change is the new bundle of GIMP brushes and tool presets which LGW already wrote about in detail a while ago.

Brushes and presets

This is the beginning of a major revamp of the resources that GIMP ships by default. The new brushes and presets provide a much better out-of-box experience for digital painting. The work was done by Ramon Miranda (GIMP Paint Studio) and Guillermo Espertino.

As usual, bugs were fixed and translations were updated. There are still some bugs in the tracker that block the release of v2.8. Fixing them will take a while, unless the team decides to skip it for now.

Both the source code and the Windows installer are available for download.

What's up with that goat?

Genetically Engineered Goat, Large, aka GEGL, is GIMP's new image processing core that has all the buzzwords it can eat. It takes CIE LAB, puts some EXR on top, sprinkles it with 32-bit float per-channel precision, composites it all with nodes, and gobbles it down via mipmaps really fast.

The plan to finalize GIMP's transition to using GEGL for everything has been postponed for ages. Right after branching 2.7.5, the team created a branch called goat-invasion where they are making GIMP use GEGL, for real. According to the current plans, this branch will become v2.10.

What happens when the goat invades?


So, the usual thing, really. As demonstrated by rore.

The team has already replaced GIMP's tile manager with GEGL code and wiped out legacy color adjustment operations, replacing them with GEGL operations. Right now color ops still use 8-bit precision, and it's too early to say if higher bit depth is going to feature in 2.10 (or 3.0, whichever version is next).

The new branch compiles and works, but there are all kinds of little annoying things, like no undo for color tools and broken duplication of layers. This is, of course, only to be expected for a subproject that started a mere 48 hours ago.

Er... 2.8? 2.10? 3.0? What the hell?

It's a bit of a puzzler indeed. But here's the basic idea.

When the last bugs are fixed and the release notes are finished, the team will release v2.8. Nobody knows when.

Meanwhile, work on the GTK+3 port and the GEGL invasion will continue and is likely to become v2.10. That version will not necessarily have high bit depth precision available, but it will be using GEGL natively anyway. In fact, it already does.

Once again, we are talking about a very early stage of a project. Everything is possible. By joining the project and helping out you will make this possibility a reality.

26 Responses. Comments closed for this entry.

  1. Still no 2.8??  I think Gimp really needs a change in its management. All this time without a real release is ruining what credibility (if any) this app still has.

    Maybe Gimp needs a Ton Roosendaal to get the project back on track.

  2. Alexandre Prokoudine 15 March 2012 at 11:01 am

    A change is planned: branches for new features for a manageable release process and shorter release cycles.

  3. I think GIMP's management has already done a great job. Can’t wait for 2.8.

  4. Very interesting news :)

  5. @1ko: agreed that Blender's development is the model for all open source projects.

  6. Cool, I see the ‘Chaos&Evolutions’ brushes are inside too :) Good job on the default presets, Ramon.

  7. Kevin Brubeck Unhammer 15 March 2012 at 3:05 pm

    The longer you wait, the more you feel like “people have waited so long, we must get it bug-free before release”. Bad cycle –  software is never bug-free. Hope they get out of it soon :)

  8. Everyone planning on Ubuntu 12.04 is going to be very disappointed that Gimp 2.8 won’t be there. Darktable 1.0 will be there, since it was released just today. We really need Corel to port Paintshop Pro to Linux. Desperately! Until Gimp is turned into a foundation funded by major players, dreams will be dreams!

  9. very interesting news! thank you for keeping us up 2 date with the development!

    if GIMP natively uses GEGL from 2.10+/3 or whatever, is it a huge piece of work to get 32bit/channel into GIMP? I mean, if everything is then working on the base of GEGL, is there still much work to be done by a developer to create that possibility?

  10. Alexandre Prokoudine 16 March 2012 at 9:43 am

    @devvv I don’t think it has been estimated. We’ll have to wait and see :)

  11. I’ve found that blender makes development really easy. For instance, hovering over any button reveals the object that you need to manipulate. Perhaps this makes a barrier to entry much less…idk.

    One advantage Blender has over Gimp is its learning curve. Blender is decidedly more difficult to learn. This may seem like a disadvantage, however it means that people hang around forums and do more tutorials to become proficient with it. When I learned how to use both, I found a huge difference in the size of the communities. I’d assume that active community members are more likely to eventually participate in development…additionally, I’ve actually seen efforts by blender community members to teach each other basic programming for the purpose of recruiting future developers.

    Anyways, I’m curious about the use of floating point numbers for channel depth. I’ve noticed this practice in audio sampling as well. Is the concept that the imprecision from a float is negligible, so it is used over a long integer? Just wondering.

  12. Stefan: I can at least speak for the audio world, and there the imprecision is negligible compared to the gain in comfort.

    Imagine a digital audio workstation like Ardour, where you probably have a lot of channels, each holding a few effects. Without float processing you have to take care that your signal never(!) clips at any processing stage. Inside the effect chain you normally have no metering, so that’s nearly impossible. By using float you only have to make sure that the last stage, before the signal is routed to the soundcard and DA converter, is in the proper range.

    One could also say, that rounding errors don’t sound bad but digital clipping is hell ;)
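Sebastian's float-headroom argument can be sketched in a few lines of Python (the function names here are made up purely for illustration):

```python
def mix_int16(a, b):
    # Integer mixing: the sum must be clamped ("saturated") to the int16
    # range, so anything over full scale is lost for good.
    return max(-32768, min(32767, a + b))

def mix_float(a, b):
    # Float mixing: intermediate values may exceed [-1.0, 1.0]; only the
    # final stage before the DA converter needs to bring them back in range.
    return a + b

# Two loud int16 samples clip when summed: the true sum (60000) is gone.
clipped = mix_int16(30000, 30000)   # 32767

# The float equivalents keep the true value for a later gain stage...
loud = mix_float(0.9, 0.9)          # 1.8, "over" full scale but not lost
recovered = loud * 0.5              # ...which brings it back to 0.9
```

This is exactly the "only watch the last stage" workflow: the out-of-range intermediate value is recoverable with float, unrecoverable with saturated integers.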

  13. Stefan, there’s no imprecision in using a 32-bit float for audio samples or for per-channel color data. A 32-bit floating point number has a 24-bit significand and therefore can exactly reproduce integer/fixed-point data of up to 24 bits. 16-bit audio is perfectly fine as a final mastering format, and humans can’t perceive the difference between a higher bit depth master and a properly done 16-bit version in blind tests. The audio production process requires more precision headroom, but 20 bits would be enough for most applications and real-world ADC/DAC setups can’t do better than about 20 bits of precision. For various reasons, we’ve all settled on 24-bit audio for production; there never will be need for more precision than that.

    Similar considerations apply for 16 bit color channel depth; I see no evidence that variations in color requiring more than 10 bits per channel are perceptible, and real-world capture equipment mostly maxes out at 12bpc. 16bpc gives us more than plenty of headroom to play with curves, levels, etc without running into perceptible precision losses.

    So 32-bit floats actually offer more precision than either audio or a single color channel will ever need. Why use them instead of 24-bit audio or 16-bit color channels? Mostly for computing convenience. Computers are very quick with 32-bit floats, and using a floating-point format with more than enough precision makes a lot of things simpler when coding various kinds of algorithms.
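Dan's point about the 24-bit significand is easy to verify; here is a small Python sketch (`to_f32` is just an illustrative helper that round-trips a value through a 32-bit float):

```python
import struct

def to_f32(x):
    # Round-trip a Python float through an IEEE 754 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

# A 32-bit float has a 24-bit significand, so every integer up to 2**24
# (and hence any 24-bit fixed-point sample value) survives exactly...
assert to_f32(2**24) == 16777216

# ...but the very next integer no longer fits and rounds back down.
assert to_f32(2**24 + 1) == 16777216
```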

  14. Looks like Photoshop CS 6 is also getting a UI redesign.

    And it looks way better and more professional than Gimp’s new thrown-together design! Gimp devs, up to the challenge? Can you do better, and make Linux users proud?!

  15. Alexandre Prokoudine 23 March 2012 at 7:24 am

    Photoshop’s UI never stopped changing :) They just changed more than usual this time :)

  16. Photoshop CS 6 gives you the option of a Black, Dark, Medium Grey, or Light Grey colored user interface. “BY DEFAULT”! No need to add themes. Raw Therapee has this feature as well. So why does Gimp’s Single Window Mode, “BY DEFAULT”, have only options for icon sizes, with the same grey colored user interface instead of a dark grey or medium grey color, like most pixel and raw editors have these days, without having to change the whole GTK desktop theme in Linux? CS 6 also has way nicer fonts. With all the great open-source fonts, why do Gimp’s fonts look terrible? I’m no Adobe fan, but they have done a way better job of designing a modern user interface than Gimp ever has. With the way you created this beautiful website, it would be nice to put this same talent into designing a professional user interface that RIVALS proprietary professional apps. My rant!

  17. Alexandre, look at the user interfaces of apps like Corel PSP X4, DXO, Capture One, Capture NX2, etc. Some beautiful user interfaces for these “commercial” photo apps. It seems to me that it took Gimp devs way longer than it takes these companies to put together a nice and professional looking user interface. Take Gimp 2.6, maximize the main panel, add the toolbars to each side, dock them, and that is all that single window mode looks like. Nothing new. Same icons, same color, just toolbars docked to the main screen, with tabs added.

  18. Alexandre Prokoudine 23 March 2012 at 8:17 am

    As for colors and icons, I think we are just waiting for the GTK+3 port. IIRC, Jimmac had some plans regarding styling of GIMP (so much easier with GTK+3 and CSS), and he actually started experimenting with new stencil-style icons (you can find it in gnome-design Git repo).

  19. Thanks Dan and Sebastian.

    Though I do have a question (I can’t find info on Google).

    Isn’t the exponent in the float overhead? If you are only using the manissa’s maximum value (thereby changing the exponent along the way) to represent values, what does the exponent accomplish? Additionally, I’ve been trained that integer arithmetic is significantly faster than floating point.

    Dan briefly addressed this, but I am a little confused. It has been a long time since I’ve thought about the binary representation of a float, so forgive me if I went wrong somewhere.

  20. When would you be just using the max value for the significand? That’s bizarre.

    In a final export format, the exponent would be needless overhead (well, unless you’re using the float16 HDR formats). But it’s extremely useful in intermediate computations because you keep full precision.

    For instance, if you’re using 16-bit signed integer arithmetic, the only numbers with 15 significant binary digits are -32768, -32767, and 32767; any results between -32767 and 32767 have less precision (32767/32767 = 1 = 32767/16384), and any results outside of the range -32768 to 32767 simply get “clipped” back to those limits as Sebastian alluded to (see Wikipedia on saturation arithmetic for detail).

    With floating point numbers, a vast range of numbers (for 32-bit, positive numbers from 2^-126 to slightly under 2^128) will have full precision, and tiny numbers too small to have full precision (called denormals; for 32-bit, 2^-149 < x < 2^-126) will have gradually degrading precision. The extra precision is extremely useful for intermediate computations.

    Floating-point computations aren’t significantly slower than integer these days. Some microbenchmarks will favor integer, some will favor floating point, but the small difference between the two will be totally swamped by things that make more of a performance difference like cache considerations and branch prediction.
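Dan's numbers for normals and denormals can be checked directly; here is a small Python sketch (`to_f32` is an illustrative helper that round-trips a value through a 32-bit float):

```python
import struct

def to_f32(x):
    # Round-trip a Python float through an IEEE 754 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

# Full precision holds all the way down to the smallest normal number...
assert to_f32(2.0 ** -126) == 2.0 ** -126

# ...then denormals carry gradually degrading precision down to 2**-149...
assert to_f32(2.0 ** -149) == 2.0 ** -149

# ...and anything smaller than that flushes to zero.
assert to_f32(2.0 ** -150) == 0.0
```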

  21. Thanks Dan. My apologies on the miscommunication, it is a typo. I was on my android.

  22. No problem; I had thought perhaps you might not have meant mantissa there.

    BTW the qualifier “these days” in my statement about int vs fp performance may explain why you’d been trained otherwise. Way back in the day fp really was much slower than int, and the difference between memory speeds and cpu speeds was much smaller so cache worries were much less of an issue.

    That changed gradually over the years, and by the time of the original Athlon and then the introduction of SSE2 (1999, 2001) the perf difference was no longer really an issue for desktops and laptops. Even though it hasn’t been an issue for over a decade, people may still have been teaching each other the “received wisdom” from the bad old days years after the situation changed.

  23. Would be cool to see an online course “Get into GIMP development” here:
    Just trying to think of ways to increase the number of developers!

  24. Gimp Foundation badly needed! Here is a good reason.

    Gimp and Astro Imaging from Chandra web site.

    The industry standard software for processing & imaging work of this nature is Adobe Photoshop. Photoshop comes in a variety of “flavors” priced according to the needs of the user and is available for both Windows and Mac (unfortunately, not Linux). While we would like to maintain an open source workflow in the openFITS project, there are certain advantages to using Photoshop and the FITSLiberator plugin that will make this difficult. For the first tutorial, and anywhere else that is applicable, the Gnu Image Manipulation Program (GIMP) will be used as this is completely open source, available for all platforms, and will be suitable for the more straightforward images. Although GIMP is perfectly capable of reading FITS files, it is extremely limited in its control over image scaling. As the difficulty level of the images increases, this limitation will force the use of Photoshop with the FITSLiberator plugin developed in conjunction with ESA/ESO and NASA. This plugin gives the user complete control over the appearance of data before it is projected to the screen and most of the scaling information is lost.

    This is a good reason for the need to have a foundation with many contributors and support, like LibreOffice has. Professional astronomers use Linux platforms a lot, so why this limitation in Gimp? Unacceptable!!

  25. I like it when folks get together and share thoughts.
    Great blog, stick with it!