
Source-based distros are awful (tipsy blog post)

This is a discussion on Source-based distros are awful (tipsy blog post) within the General Discussions forums, part of the Community Boards category.

  1. #1
    Registered User MutantJohn's Avatar
    Join Date
    Feb 2013
    Posts
    1,279

    Source-based distros are awful (tipsy blog post)

    Absolutely awful. The worst things to have ever happened to Linux since... Uh... I don't know, they're just awful.

    I'm trying to get some code working on my university's supercomputer. It all sounds so perfect in theory but holy poop is it absolutely awful using this thing in practice because I don't have root privileges. Granted, CentOS does seem to have a package manager but I can't use it as I'm not a sudoer T_T

    I have some basic requirements for my code. Lambda functions and the boost library with the boost::threadpool on top.

    So I check the version of g++ and it says the native one is 4.4 something. Turns out, no lambda support for that. It's only for 4.5 and above.

    As a matter of fun fact, it takes literally an hour to install gcc from source. Literally. Sure, it configures quickly but my God, I forgot to mention the stubs-32.h error. Yeah, no 32-bit support for this compiler 'cause I had to configure it with --disable-multilib. So once I finally got past that awful error, it took 45+ minutes to 'make' the code.

    'make install' was surprisingly quick.

    So good, I have g++ 4.8.2 working and I have lambdas now. Great.

    Next is boost. I download it and it seems to install fine. It looks like it's using the base g++ which is version 4.4 but we'll see if that's an issue.

    So I try to install the threadpool but lo and behold, I need doxygen. So I try to install doxygen and it gives me errors that definitions aren't matching.

    All I can think is, forget this. I don't need this in my life. 32 concurrent threads? Pfft, for amateurs. My quadcore runs my code fine.

    It's so hard for me not to email my old professors who manage this computer. Is it rude to send an email going, "Hey, would you like to make me a sudoer or perhaps use your privileges to, one, update this old compiler, two, install useful libraries, and three, run this code I wrote for 32 threads?"

    I'm sooooo tempted but I know I shouldn't but what're they gonna do? Kick me off? Good. I don't need this much nerd rage in my life anyway.

    Alright, I'm going to bed, just felt like blabbing about how dumb using a computer can be sometimes and this is the only place I could think to vent. I'm goin' to bed.

  2. #2
    Master Apprentice phantomotap's Avatar
    Join Date
    Jan 2008
    Posts
    4,406
    O_o

    Your problems are really your own fault for not considering that "source package managers" may already exist.

    Soma
    Epy likes this.
    “Often out of periods of losing come the greatest strivings toward a new winning streak.” -- Fred Rogers
    “Salem Was Wrong!” -- Pedant Necromancer

  3. #3
    Registered User manasij7479's Avatar
    Join Date
    Feb 2011
    Location
    Kolkata@India
    Posts
    2,523
    Quote Originally Posted by MutantJohn View Post
    As a matter of fun fact, it takes literally an hour to install gcc from source.
    What sort of 'supercomputer' is that?
    It takes about 15 mins on my desktop.
    Hodor likes this.
    Manasij Mukherjee | gcc-4.9.2 @Arch Linux
    Slow and Steady wins the race... if and only if :
    1.None of the other participants are fast and steady.
    2.The fast and unsteady suddenly falls asleep while running !



  4. #4
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,578
    All this because you wanted lambdas. One of the first things you start learning about moving your code between machines is that portability usually requires you limit your access to language features, technologies and their versions. Your decision to use a brand new language feature without first analyzing if it would fit your hardware resources is what failed. Don't blame it on Linux.

    Anyways, unless you absolutely must have anonymous inline functors, there's no reason you should use lambdas in your code (considering the limitations of your uni computer). And you will have a chance to learn a thing or two about the old and trusty function objects.

    You may disagree. It's perfectly fine. But then... well, you already know what it means to try and pack your pretty lambdas into a machine you can't even sudo. Seems to me -- and I'm guessing (not really) -- it takes less work to change those lambdas than it does to mess with your uni computer.

    (also, doxygen requirements can be removed if you don't install the documentation. Which you don't really need for this setup)
    The programmer’s wife tells him: “Run to the store and pick up a loaf of bread. If they have eggs, get a dozen.”
    The programmer comes home with 12 loaves of bread.


    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  5. #5
    Registered User MutantJohn's Avatar
    Join Date
    Feb 2013
    Posts
    1,279
    phantom: I tried searching for non-root package managers and couldn't really seem to find any. I know CentOS has yum, I just can't use it. Unless you happen to know of a manager that doesn't require sudo...

    manasij7479: Lol I have no idea why it took so long. I went out for a walk with my girlfriend for awhile and came back and it was still compiling.

    Mario: I just wanted to be kewl! And plus, I use lambdas because the boost threadpool uses either boost::bind or lambdas or I guess functors or w/e they're called for scheduling functions with arguments. And since when is the 2011 standard "brand new"? Aren't they working on C++14 right now?

  6. #6
    and the hat of copycat stevesmithx's Avatar
    Join Date
    Sep 2007
    Posts
    507
    CentOS isn't a "Source-based" distro. If you are talking about the distro's ability to compile its applications/software from source, then all distros have this capability.
    A distro is "Source-based" if the distro itself can be built from source. Gentoo, for example, is "Source-based".
    I have used CentOS until recently, before I switched to Arch, and I can tell you that it is an extremely stable distribution (the reason you found an older version of gcc), contrary to Arch, which is bleeding edge.
    Last edited by stevesmithx; 11-19-2013 at 01:54 PM.
    Not everything that can be counted counts, and not everything that counts can be counted
    - Albert Einstein.


    No programming language is perfect. There is not even a single best language; there are only languages well suited or perhaps poorly suited for particular purposes.
    - Herbert Mayer

  7. #7
    Registered User whiteflags's Avatar
    Join Date
    Apr 2006
    Location
    United States
    Posts
    7,761
    And since when is the 2011 standard "brand new"? Aren't they working on C++14 right now?
    Please don't fall into that trap. Standardization itself is a slow process, and decisions can change easily. I would be surprised if the new Standard was actually finished in 2014; who knows when the first version of GCC will come out with a decent feature set. Not complete, mind you, but decent. Then you will have the same problem that you do now.

  8. #8
    Registered User
    Join Date
    Oct 2006
    Posts
    2,588
    Quote Originally Posted by MutantJohn View Post
    I went out for a walk with my girlfriend...
    everyone knows that programmers don't have girlfriends
    Epy likes this.
    Code:
    namespace life
    {
        const bool change = true;
    }

  9. #9
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Posts
    23,010
    Quote Originally Posted by whiteflags View Post
    Please don't fall into that trap. Standardization itself is a slow process, and decisions can change easily. I would be surprised if the new Standard was actually finished in 2014; who knows when the first version of GCC will come out with a decent feature set. Not complete, mind you, but decent. Then you will have the same problem that you do now.
    C++11 was a slow process because they took at least 6 years (probably more) with half-baked implementations everywhere.
    C++14 is different. The number of problems is drastically reduced compared to C++11 (Herb mentioned it in the C++ Native keynote), plus we already have pretty much two good implementations. GCC is well on the way (source: C++1y/C++14 Support in GCC - GNU Project - Free Software Foundation (FSF)), but Clang has already implemented C++14 (draft) entirely (source: Clang is (draft) C++14 feature-complete! : Standard C++).
    Then there's also the fact that the standard committee really wanted to put all features into the language in one fell swoop. Libraries, core features, etc. This time around, the committee is putting more focus on out-of-band TS releases (again, Herb's Going Native keynote).
    I don't think we'll have to wait long. Visual C++ is also catching up pretty well. The current release (CTP) covers roughly 70% of C++11/14 (source: Visual C++ Compiler November 2013 CTP | Sutter’s Mill).
    C++ is accelerating. There is no doubt about that.
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  10. #10
    Registered User MutantJohn's Avatar
    Join Date
    Feb 2013
    Posts
    1,279
    I think I'm only complaining because I'm spoiled by having root privileges on my home system.

    I'm also too used to Arch being bleeding edge and having the latest software installed. But seriously, 4.4? Who still uses that version of gcc? "Well, obviously your school's computer does." Yeah...

    I still kind of wanna send an email to the admins asking for either root privileges so I can yum stuff in or that they start incorporating more into the computer for all users.

    Namely, I think we can all (the users of the computer, I mean) benefit from an updated version of gcc (which is supposed to be fully backwards compatible, isn't it?), the Boost libraries, and common libraries used in scientific computing such as FFTW, GSL, and LAPACK. I mean, this computer is meant for running scientific applications so I don't get why they wouldn't want to include this stuff by default.

    And doing so would be incredibly easy because they can actually use yum. There's already a lot of stuff on there but it defeats the point, in my opinion.

    It would also increase compatibility between a user's home version and the supercomputer version because I develop first in my comfortable DE and then test everything for real on their machine. But I don't know, I don't manage a supercomputer so...

  11. #11
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,578
    I don't know CentOS, but most distros that aren't source-based or bleeding edge (this is usually any distro whose targets include the enterprise market) tend to be very conservative about their base or stable repositories. They simply follow the rule that you only update a package after it has been thoroughly tested and proved stable. The main goal is to provide a stable running environment. Other repositories (testing, etc.) may include bleeding edge packages. Your university may be following this principle even if the distro doesn't.

    Another thought to keep in mind is that Universities are, for good reason, particularly careful about cutting edge versions. You usually need to go to their labs to get access to them. The publicly available environment is otherwise pretty old and, consequently, able to serve a larger variety of projects without fear of breaking backwards compatibility with already existing users.
    whiteflags and laserlight like this.
    The programmer’s wife tells him: “Run to the store and pick up a loaf of bread. If they have eggs, get a dozen.”
    The programmer comes home with 12 loaves of bread.


    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  12. #12
    Registered User MutantJohn's Avatar
    Join Date
    Feb 2013
    Posts
    1,279
    You know what I say to that philosophy? Bow to the master Arch Linux race!

    And also, boooooooooooo! Boo!

  13. #13
    Registered User Hodor's Avatar
    Join Date
    Nov 2013
    Posts
    650
    Quote Originally Posted by manasij7479 View Post
    What sort of 'supercomputer' is that?
    Possibly 5 MOS 6502s working in parallel.

  14. #14
    Registered User MutantJohn's Avatar
    Join Date
    Feb 2013
    Posts
    1,279
    Oh har har. That was so funny I forgot to laugh... Oh wait, I didn't.
