Thread: gui in c

  1. #1
    Registered User
    Join Date
    Apr 2019
    Posts
    662

    gui in c

    Is it possible to write GUI apps in C? How does one go about it?

  2. #2
    Informer -Adrian's Avatar
    Join Date
    Jan 2013
    Posts
    811
    You could consider using GTK, which is written in C.
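    To give a feel for it, here is a minimal GTK 3 "hello world" in plain C (a sketch; the application id "org.example.hello" is just a placeholder). Build it with `gcc hello.c $(pkg-config --cflags --libs gtk+-3.0)`.

    ```c
    #include <gtk/gtk.h>

    /* Called once the application is ready; creates and shows a window. */
    static void activate(GtkApplication *app, gpointer user_data)
    {
        GtkWidget *window = gtk_application_window_new(app);
        gtk_window_set_title(GTK_WINDOW(window), "Hello, GTK");
        gtk_window_set_default_size(GTK_WINDOW(window), 320, 240);
        gtk_widget_show_all(window);
    }

    int main(int argc, char **argv)
    {
        GtkApplication *app = gtk_application_new("org.example.hello",
                                                  G_APPLICATION_FLAGS_NONE);
        g_signal_connect(app, "activate", G_CALLBACK(activate), NULL);
        int status = g_application_run(G_APPLICATION(app), argc, argv);
        g_object_unref(app);
        return status;
    }
    ```

    Note that even here the program is structured around callbacks: `main` hands control to `g_application_run`, and your code only runs when GTK calls it.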

  3. #3
    Registered User
    Join Date
    May 2019
    Posts
    209
    If you've ever seen a C GUI application, you'll notice there's a lot of detail. It is possible to simplify that detail into a collection of helper functions, but the result is very difficult to manage in a platform-independent fashion. macOS, for example, has a GUI system based on Objective-C, which has its own means of managing memory. C and Objective-C (or C++) can mix freely, so Objective-C code can call or be called from C/C++ code, but wrapping the Objective-C API in C to produce a platform-independent interface is a lot of work. Linux and UNIX applications using the same basic GUI toolkit (whichever is chosen by the user) are generally portable to each other, and while GNOME differs from KDE, and various forks of each exist, they at least share the same basic design. Windows, on the other hand, while still a C-oriented interface, is its own beast.

    In other words, from a C perspective I've only ever seen platform dependent development.

    In the 21st century that makes no sense. If one's purpose is to understand these older-style GUI interfaces, it is an interesting study, but the sheer amount of work required to produce a simple dialog is absurd given the result.

    I haven't even detailed what custom controls require. At one level of GUI development the target is what I call a "bag of fields" application. There will be buttons, sliders, edit controls, list boxes, all presenting and accepting information. These are relatively trivial. At the next level are custom controls, where the standard list boxes or tree controls aren't sufficient to convey the information, so the programmer takes control at a rawer level to draw information, icons, or images. The next level is a fully custom display, something like a drawing program. Here there is virtually nothing of the standard OS controls, and the programmer is taking in raw input messages, drawing everything the user sees, providing all actions from scrolling or panning to zooming or swiping.

    The most basic point of GUI isn't really the graphics, it's the event system. One might say the program is written inside out. In simple console applications the execution begins in main, winds through functions, might stop at menus for selections, performs actions, ultimately exiting. There's a single path through everything, and the flow is under the control of the programmer.

    In an event-driven system, which can be implemented even in text mode, the path through the execution of the program is up to the user. The application launches, displaying what options are available through the metaphor of buttons, menus, toolbars, edit controls, etc., but the user decides what happens. The application basically enters a "cooperative" animation loop, waiting for an instruction.
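    That "inside out" structure can be sketched in a few lines of plain C. This is only an illustration, not a real OS message loop: the event queue here is a fixed script standing in for whatever the windowing system would deliver, and `get_event`/`dispatch` are invented names (on Windows the analogous calls are `GetMessage` and `DispatchMessage`).

    ```c
    /* Sketch of an event-driven program in plain C: a loop pulls events
     * from a queue and routes them; the "user" (here, a script) drives flow. */
    #include <stdio.h>

    typedef enum { EV_KEY, EV_CLICK, EV_QUIT } EventType;
    typedef struct { EventType type; int data; } Event;

    /* Stand-in for the OS message queue: a fixed script of events. */
    static Event queue[] = {
        { EV_KEY,   'a' },
        { EV_CLICK, 42  },
        { EV_QUIT,  0   },
    };
    static int next_event = 0;

    /* Returns 0 when the loop should stop, like GetMessage() on Windows. */
    static int get_event(Event *ev)
    {
        *ev = queue[next_event++];
        return ev->type != EV_QUIT;
    }

    /* Routes each event to the code that handles it. */
    static void dispatch(const Event *ev)
    {
        switch (ev->type) {
        case EV_KEY:   printf("key %c\n", ev->data);      break;
        case EV_CLICK: printf("click at %d\n", ev->data); break;
        default: break;
        }
    }

    int main(void)
    {
        Event ev;
        while (get_event(&ev))   /* main just waits; events decide what runs */
            dispatch(&ev);
        return 0;
    }
    ```

    Notice that `main` contains no application logic at all; everything interesting happens in handlers chosen by incoming events.
    
    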

    This means that any process might be selected. The programmer isn't defining a strict path of execution. There may be layers which are disabled and enabled based on context, but within any given context the user may select all possible options. For example, in a vector drawing program (sketching in line art), the user may have selected a drawing tool (a pen). The pen may be placed for a stroke anywhere on the page and drawing commences. The "mode" of operation switches from hovering of the pen (not drawing) to pen down (drawing), and then after drawing whatever the user intended, the mode is returned to hovering. If, however, the user selects one of the elements in the drawing, the mode may switch to that of selection, at which point there is no pen; this mode probably moves (or deletes) the selected element.

    The overall application in GUI is generally modeless, meaning that most any option is available. Modal dialogs may appear for some situations which halt all other options until the dialog is dismissed. Modes are key to GUI.
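    The pen-tool modes described above boil down to a small state machine. A minimal sketch, with invented names for a hypothetical drawing app (nothing here is a real toolkit API):

    ```c
    /* Sketch: tool modes in a hypothetical vector-drawing app,
     * modeled as a state machine driven by input events. */
    #include <assert.h>

    typedef enum { MODE_HOVER, MODE_DRAWING, MODE_SELECTION } Mode;
    typedef enum { PEN_DOWN, PEN_UP, CLICK_ELEMENT, CLICK_EMPTY } Input;

    static Mode next_mode(Mode m, Input in)
    {
        switch (m) {
        case MODE_HOVER:
            if (in == PEN_DOWN)      return MODE_DRAWING;   /* start a stroke */
            if (in == CLICK_ELEMENT) return MODE_SELECTION; /* pick an element */
            break;
        case MODE_DRAWING:
            if (in == PEN_UP)        return MODE_HOVER;     /* stroke finished */
            break;
        case MODE_SELECTION:
            if (in == CLICK_EMPTY)   return MODE_HOVER;     /* deselect */
            break;
        }
        return m;  /* unhandled input: stay in the current mode */
    }

    int main(void)
    {
        Mode m = MODE_HOVER;
        m = next_mode(m, PEN_DOWN);      assert(m == MODE_DRAWING);
        m = next_mode(m, PEN_UP);        assert(m == MODE_HOVER);
        m = next_mode(m, CLICK_ELEMENT); assert(m == MODE_SELECTION);
        return 0;
    }
    ```

    A real application multiplies this table by every tool, every modifier key, and every context layer, which is where the manual bookkeeping piles up.
    
    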

    Consider, for example, selection with the mouse. The mouse hovers, which we'll call a default state. For this, the application receives hundreds of mouse-move messages per second, with coordinates for the location. This animates the cursor; it may also initiate a search for whatever object is under the mouse. As the mouse approaches a potential target, hover may switch to a "candidate found" state, where the cursor changes shape to indicate, say, that an object might be selectable. If the mouse continues moving it may leave that region, returning to a default hover.

    This "in" or "out" of a region is key. It defines a particular paradigm. When the mouse moves into a region for the first time, it must capture the mouse messages so they are exclusive, temporarily, to the "inside hover" logic. When the mouse first leaves the region, this capture is released while, at the same time, the cursor is returned to the default state.
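    The enter/leave transitions above amount to a hit test plus a one-bit memory of whether we were inside last time. A minimal sketch (names like `on_mouse_move` are invented; a real toolkit would also call something like `SetCapture`/`ReleaseCapture` at the marked points):

    ```c
    /* Sketch of enter/leave hover tracking: every mouse-move message is
     * hit-tested against a region, and only the transitions matter. */
    #include <stdio.h>

    typedef struct { int x, y, w, h; } Rect;

    static int inside(const Rect *r, int x, int y)
    {
        return x >= r->x && x < r->x + r->w &&
               y >= r->y && y < r->y + r->h;
    }

    static int hovering = 0;   /* were we inside the region last time? */

    /* Returns 1 on enter, -1 on leave, 0 when nothing changed. */
    static int on_mouse_move(const Rect *target, int x, int y)
    {
        int in = inside(target, x, y);
        if (in && !hovering)  { hovering = 1; return  1; }  /* enter: capture, change cursor */
        if (!in && hovering)  { hovering = 0; return -1; }  /* leave: release, restore cursor */
        return 0;                                           /* no transition */
    }

    int main(void)
    {
        Rect r = { 10, 10, 100, 20 };
        printf("%d\n", on_mouse_move(&r, 0, 0));    /* outside: no transition */
        printf("%d\n", on_mouse_move(&r, 50, 15));  /* first move inside: enter */
        printf("%d\n", on_mouse_move(&r, 60, 15));  /* still inside: nothing */
        printf("%d\n", on_mouse_move(&r, 0, 0));    /* moved out: leave */
        return 0;
    }
    ```
    
    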

    Beginners find this mind boggling at first, like recursion or the address of a pointer.

    If a selection is made, a mouse capture is performed, and the application enters a "drag state". Here, the selected object is probably moved (dragged) until a button is released (usually). This initiates a drop state. In the drop state various checks are made that the drop makes sense - there can be a "nested drag/hover" logic required to change the mouse cursor to show where drops are ok (or not), and finally when the drag is finished, the mouse capture is released, and the cursor returned to the default.
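    The capture/drag/drop cycle can likewise be sketched as a tiny state machine. All names here are invented for illustration; the comments note the real Win32 calls (`SetCapture`, `ReleaseCapture`) a Windows program would make at those points.

    ```c
    /* Sketch of the hover -> drag -> drop cycle with mouse capture. */
    #include <assert.h>

    typedef enum { ST_HOVER, ST_DRAG } DragState;

    typedef struct {
        DragState state;
        int captured;   /* do we currently hold the mouse capture? */
        int obj;        /* id of the dragged object, -1 if none */
    } Drag;

    static void on_button_down(Drag *d, int hit_obj)
    {
        if (d->state == ST_HOVER && hit_obj >= 0) {
            d->state = ST_DRAG;
            d->captured = 1;      /* SetCapture() on Windows */
            d->obj = hit_obj;
        }
    }

    static void on_button_up(Drag *d, int drop_ok)
    {
        if (d->state == ST_DRAG) {
            if (drop_ok) { /* commit the move/drop here */ }
            d->captured = 0;      /* ReleaseCapture(); restore default cursor */
            d->obj = -1;
            d->state = ST_HOVER;
        }
    }

    int main(void)
    {
        Drag d = { ST_HOVER, 0, -1 };
        on_button_down(&d, 7);    /* clicked object 7: enter drag, take capture */
        assert(d.state == ST_DRAG && d.captured && d.obj == 7);
        on_button_up(&d, 1);      /* released over a valid target: drop, release */
        assert(d.state == ST_HOVER && !d.captured && d.obj == -1);
        return 0;
    }
    ```

    The nested hover logic for valid drop targets mentioned above would sit inside the drag state, re-running a hit test on every move, which is exactly the kind of bookkeeping that balloons in real applications.
    
    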

    All of this is quite manual in C.

    For the 21st century, it makes no sense to do this in a platform-dependent fashion. When there was no alternative (pre-1990s), there were no platform-independent applications.

    Now, the proper response of a C++ developer on first looking at the GUI interface for any operating system is to encapsulate. That produces something like MFC, the Microsoft framework devoted to Windows in C++. The next most appropriate reaction is that this makes no sense, and that a new, platform-independent framework is required. There are two robust options now, but there were dozens at one point in the late '90s, most of them terrible.

    Qt is popular. I'm not a fan as it attempts to do way too much by itself. Yet, it is robust, and a large number of professional applications are built with it.
    wxWidgets is a free, open-source framework that has been in use for decades. It is robust and powerful, a tad older in style than C++17, and a number of professional applications are built with it.

    I don't know of others that survived the test of time.

    If you were to make games, you'd choose a game engine. There are just too many APIs and graphics chips to make a game that directly uses an API like DirectX or Vulkan.

    If you make GUI applications for the desktop, you choose a framework for the same reason.

    If you're going mobile, wxWidgets isn't the best option, and even Qt might be a squeeze. Mobile is still "new" in this way. The display layout is tiny, usually portrait orientation, and the touch interface is quite different from (though related to) mouse usage.

    Some developers choose to use some form of HTML for interfaces. That is, instead of writing a GUI application in C++, they make a JavaScript-scripted application which runs inside an HTML control. This naturally translates between mobile and desktop, providing a web-like user interface.

    Then, there are other somewhat related notions of scripted interfaces dependent upon other scripting/markup languages, all using the same notion of a scripted user interface "performed" or "rendered" in a common interface code adaptable to various devices.

    For some applications it may be a good option. Don't expect a CAD system to be built this way, but "bag of fields" with icons...something you might also see on a website - it works.

    Indeed, at this point, user interfaces have mutated into web-like interfaces on desktop/GUI, or GUI like interfaces on web pages.

    My own suggestion is that GUI application work is not well suited to C. I would advise only the curious to dive in there. It is a worthy study, sure; you may well need to know how the OS does this stuff. If you want to be productive, choose a framework. If you're going to make portable designs for mobile AND desktop, that's still a moving target: an unpleasant mashup of various techniques, usually involving scripting the interface.
