Discussion: Servers, CI, and you!
Evan Kinney
2012-10-03 09:47:08 UTC
First of all, sorry I haven't been around on IRC as of late; my job
has been a little crazy.


As I sit here, unable to sleep, I find myself thinking about Adium.
I'm not sure how I feel about this, but I think it's a good thing.

...in any case, here's something I've been thinking about bringing up
for a while:

I'd like to possibly revamp eider and duck.

They're both running long-in-the-tooth versions of CentOS 5 and, to be
quite honest, no one is really sure of CentOS's future given the
nature of the project and the fact that Red Hat keeps making it more
difficult to build their SRPMs into something usable. In my opinion,
this leaves us with two options: the Ubuntu LTS spin (or Debian
Stable, I suppose) or Fedora.

I've always been a Red Hat guy and I basically know RHEL (and, thus,
to some degree Fedora) like <insert something I know everything about
here>, so my natural inclination is to go with Fedora, but I have some
reservations with hosting a public-facing server on a platform that
releases so often and stops supporting releases after 13 or so months.
Ubuntu LTS sounds like a pretty good option to me. Thoughts, anyone? I
suppose we're also limited by the base images that Network Redux
provides for their OpenVZ instances.

As part of this, I'd like to propose we get rid of all the cPanel
cruft that's currently holding up everything on duck. I've never
really been a fan of cPanel (their installer, for instance, is a shell
script they suggest you pipe to bash via cURL that essentially
modifies your system to the point of no return) and, as far as I can
tell, there's nothing we're doing that requires it.

I think we'd be much better served by sticking the config files for
everything in a (non-publicly accessible) hg repo. I'd also like to
bootstrap the servers with Chef so that, if needed, we could spin up a
replacement server very, very quickly and in a consistent fashion.
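
Just to make that concrete, here's a rough, throwaway sketch of the kind
of bootstrap step I have in mind (the repo URL and paths below are made
up, and a real setup would use proper Chef cookbooks rather than a
one-off script like this):

    #!/usr/bin/env python3
    # Hypothetical bootstrap sketch: pull the (not-yet-existing) private hg
    # repo of config files and copy them into place. Chef would replace this
    # with real cookbooks; this only illustrates the "configs live in hg" idea.
    import os
    import shutil
    import subprocess

    CONFIG_REPO = "ssh://hg@example.adium.im/server-configs"  # made-up URL
    CHECKOUT = "/srv/server-configs"

    def sync_repo():
        """Clone the config repo on first run, pull and update afterwards."""
        if os.path.isdir(os.path.join(CHECKOUT, ".hg")):
            subprocess.check_call(["hg", "pull", "-u", "-R", CHECKOUT])
        else:
            subprocess.check_call(["hg", "clone", CONFIG_REPO, CHECKOUT])

    def install_configs():
        """Copy everything under etc/ in the checkout to the matching path in /etc."""
        src_root = os.path.join(CHECKOUT, "etc")
        for dirpath, _dirnames, filenames in os.walk(src_root):
            for name in filenames:
                src = os.path.join(dirpath, name)
                dest = os.path.join("/etc", os.path.relpath(src, src_root))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.copy2(src, dest)

    if __name__ == "__main__":
        sync_repo()
        install_configs()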

Also, it looks like duck and eider are two OpenVZ VMs in the same
Network Redux datacenter. duck has 3 cores (any reason for 3?) with
6GB of RAM, and eider has 2 cores with 2GB of RAM. What if we were to
combine those together and just have one larger VM? As long as
everything's properly configured (and given the way things are
currently set up), I can't think of any reason to have two separate
machines. Another alternative would be to split them equally, cluster
them with a Pacemaker/Corosync stack, and load balance everything with
the help of HAProxy. I have a lot of experience doing that, but I know
it's not exactly the easiest thing to maintain... so maybe simple is
better here, even if we're giving up high availability.

We might also want to look at cleaning up the DNS zones a bit, as
they're a bit of a mess if the current Apache configs are any
indication. What if we had everything use .adium.im, and had all of
the .adiumx.com URLs redirect there instead of serving the content?
This would also make it easier to manage SSL... which is another thing
I plan on making work properly (and has been discussed on here
before).
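
As a throwaway illustration of what "redirect instead of serving the
content" would look like from the outside (the host list below is just
an example, not a complete inventory), a quick check like this could
verify that each legacy adiumx.com name answers with a permanent
redirect to its adium.im equivalent:

    #!/usr/bin/env python3
    # Sanity-check sketch: each legacy adiumx.com host should answer with a
    # 301 pointing at the matching adium.im host. The host list is illustrative.
    import http.client

    LEGACY_HOSTS = ["www.adiumx.com", "trac.adiumx.com"]  # example hosts only

    def redirect_target(host):
        """Issue one HEAD request and return (status, Location) without following it."""
        conn = http.client.HTTPConnection(host, timeout=10)
        try:
            conn.request("HEAD", "/")
            resp = conn.getresponse()
            return resp.status, resp.getheader("Location")
        finally:
            conn.close()

    if __name__ == "__main__":
        for host in LEGACY_HOSTS:
            status, location = redirect_target(host)
            ok = status == 301 and location is not None and "adium.im" in location
            print("%-18s %s %-40s %s" % (host, status, location, "ok" if ok else "check me"))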

I'd also like to plant the idea of possibly trying out Jenkins instead
of Buildbot. Jenkins does some really cool stuff in the way of Xcode
integration and unit testing, among other things. It's what I use for
all of my iOS/Mac projects, and it's what we use at work. That's a
whole topic in and of itself, though... and this email is already a
novel, so I'll leave that for another time.

I know this is a lot, but it's stuff I think needs to be done at some
point. Everything would be a *lot* more maintainable, more secure, and
(most likely) significantly faster (especially the Mercurial web
interface). I'm willing to make all this happen, but I'd like to hear
some input and discussion before I put together a formal proposal for
consideration.

Sorry for the epic I wrote here. I should probably go to bed now. :)

/ek



--
Evan M. Kinney, EMT-Paramedic
Officer, NC State University Emergency Medical Services Organization
Director of Public Health and Wellness, EOSSP
+1 919.265.9396 (c) | +1 919.531.2136 (o) | emkinney at ncsu.edu | evan at txt.att.net

P.S.: This is what part of the alphabet would look like if Q and R
were eliminated.
John Bailey
2012-10-03 14:20:50 UTC
Post by Evan Kinney
I'd like to possibly revamp eider and duck.
With the cPanel junk you note below, it would probably be easier if NetworkRedux
could provide you two brand new VMs to be properly configured.
Post by Evan Kinney
They're both running long-in-the-tooth versions of CentOS 5 and, to be
quite honest, no one is really sure of CentOS's future given the
nature of the project and the fact that Red Hat keeps making it more
difficult to build their SRPMs into something usable. In my opinion,
this leaves us with two options: the Ubuntu LTS spin (or Debian
Stable, I suppose) or Fedora.
I've always been a Red Hat guy and I basically know RHEL (and, thus,
to some degree Fedora) like <insert something I know everything about
here>, so my natural inclination is to go with Fedora, but I have some
reservations with hosting a public-facing server on a platform that
releases so often and stops supporting releases after 13 or so months.
Ubuntu LTS sounds like a pretty good option to me. Thoughts, anyone? I
suppose we're also limited by the base images that Network Redux
provides for their OpenVZ instances.
In my opinion, using Fedora on a server is highly irresponsible. As you
mention, each release gets only 13 months of support, and updates often
pull in new upstream releases of software; in a server environment this is
far from ideal. On a server you generally want things not to change very
much with updates until a new OS release happens. RHEL/CentOS is better
about this.

Speaking from experience, we've (Pidgin) been mostly happy with our Debian
Stable-based VMs (our main complaints are that they're OpenVZ VMs instead of
something sane like Xen or real hardware). There's not much difference between
Debian and Ubuntu, except that Ubuntu, in my experience, tends to patch packages
more than Debian does. That said, those of us who do the administration work for
Pidgin's servers are far more comfortable on a Debian box than on a Red Hat-style
box.
Post by Evan Kinney
As part of this, I'd like to propose we get rid of all the cPanel
cruft that's currently holding up everything on duck. I've never
really been a fan of cPanel (their installer, for instance, is a shell
script they suggest you pipe to bash via cURL that essentially
modifies your system to the point of no return) and, as far as I can
tell, there's nothing we're doing that requires it.
cPanel should die ASAP. All it ever does is get in the way of people who know
what they're doing.
Post by Evan Kinney
Also, it looks like duck and eider are two OpenVZ VMs in the same
Network Redux datacenter. duck has 3 cores (any reason for 3?) with
6GB of RAM, and eider has 2 cores with 2GB of RAM. What if we were to
combine those together and just have one larger VM? As long as
everything's properly configured (and given the way things are
currently set up), I can't think of any reason to have two separate
machines. Another alternative would be to split them equally, cluster
them with a Pacemaker/Corosync stack, and load balance everything with
the help of HAProxy. I have a lot of experience doing that, but I know
it's not exactly the easiest thing to maintain... so maybe simple is
better here, even if we're giving up high availability.
One other option you might want to consider is having one VM be nothing but a
database server, and the other host the frontend stuff (Trac, the Xtras site,
etc.). NetworkRedux can give you a private VLAN to isolate this traffic and keep
it off the public network. In some cases this can make Trac significantly faster.
Post by Evan Kinney
We might also want to look at cleaning up the DNS zones a bit, as
they're a bit of a mess if the current Apache configs are any
indication. What if we had everything use .adium.im, and had all of
the .adiumx.com URLs redirect there instead of serving the content?
This would also make it easier to manage SSL... which is another thing
I plan on making work properly (and has been discussed on here
before).
Redirects are cheap enough that I have to agree with you here, but I have no
knowledge of how all the stuff you guys run works, so take that with a grain of
salt.
Post by Evan Kinney
I know this is a lot, but it's stuff I think needs to be done at some
point. Everything would be a *lot* more maintainable, more secure, and
(most likely) significantly faster (especially the Mercurial web
interface). I'm willing to make all this happen, but I'd like to hear
some input and discussion before I put together a formal proposal for
consideration.
There are a couple things I'd like to discuss with you off-list about monitoring
the servers.

John

Matthew
2012-10-03 19:27:10 UTC
Post by Evan Kinney
They're both running long-in-the-tooth versions of CentOS 5 and, to be
quite honest, no one is really sure of CentOS's future given the
nature of the project and the fact that Red Hat keeps making it more
difficult to build their SRPMs into something usable.
In my opinion, this leaves us with two options: the Ubuntu LTS spin
(or Debian Stable, I suppose) or Fedora.

Are there any recent reports of this? The only information I've seen is
over a year old. While CentOS certainly had a problem getting 6 out the
door, updates to CentOS 6 have arrived on the same day as the corresponding
updates on my RHEL 6 box. Not that you suggested it, but I also have an SL5
system that sees updates 2-5 days after my CentOS 5 systems.

As far as I can tell, CentOS has very effectively overcome any problems
working with the Red Hat SRPMs.

Yes, CentOS 5 was released a while ago, but that's a feature of an
enterprise distribution, not a reason to change. Are the packaged versions
too old for something we need, or are there actual benefits to be gained
from a newer distribution?

Unless we actually need a bleeding edge package, Fedora would be a bad idea
for us. Not only does it quickly lose support, but it really isn't properly
vetted for a system that's sitting on a remote server and maintained by
volunteer admins. I don't know Ubuntu or Debian, but another enterprise
option would be SuSE.
--
Matthew
Evan D. Schoenberg, M.D.
2012-10-03 22:54:56 UTC
Post by Evan Kinney
Also, it looks like duck and eider are two OpenVZ VMs in the same
Network Redux datacenter. duck has 3 cores (any reason for 3?) with
6GB of RAM, and eider has 2 cores with 2GB of RAM. What if we were to
combine those together and just have one larger VM?
I'm not aware of any reason to maintain 2 separate systems.
Post by Evan Kinney
I have a lot of experience doing that, but I know
it's not exactly the easiest thing to maintain... so maybe simple is
better here, even if we're giving up high availability.
Simple, simple, simple. Let's definitely avoid overcomplicating the setup. Our downloads, which are the only high-bandwidth part of the site, are hosted elsewhere, so we definitely don't need to prepare for nuclear holocaust. I also think we should stick with stable OS releases unless there's a really good reason to do otherwise.
Post by Evan Kinney
I'd also like to plant the idea of possibly trying out Jenkins instead
of Buildbot. Jenkins does some really cool stuff in the way of Xcode
integration and unit testing, among other things. It's what I use for
all of my iOS/Mac projects, and it's what we use at work.
Buildbot was what existed, worked pretty well, and was what a couple of folks were comfortable setting up a couple of years ago. I don't believe anyone has a religious commitment to it as our automated build system. Colin did a lot of its configuration (with help from several others). Guys, any objections?

-Evan


Evan D. Schoenberg, M.D.
2012-10-03 22:55:44 UTC
Post by Evan Kinney
We might also want to look at cleaning up the DNS zones a bit, as
they're a bit of a mess if the current Apache configs are any
indication. What if we had everything use .adium.im, and had all of
the .adiumx.com URLs redirect there instead of serving the content?
This would also make it easier to manage SSL... which is another thing
I plan on making work properly (and has been discussed on here
before).
*.adiumx.com is purely legacy and should not appear in shipping code or on live pages. It absolutely should be a simple redirect to *.adium.im.

-Evan