
Meeting of 2007-01-11


*** Time shown in EST

15:02 -!- mmcgrath changed the topic of #fedora-admin to: Roll call
15:02 <@mmcgrath> who's here?
15:02 < abompard> me :)
15:02  * f13
15:02 < jcollie> me!
15:03 < mdomsch> +1
15:03 <@mmcgrath> Alllrighty.
15:03 -!- fraggle_ [n=fraggle@bea13-2-82-239-143-199.fbx.proxad.net]  has joined #fedora-admin
15:03 < skvidal> fine
15:03 <@mmcgrath> We'll be working from http://fedoraproject.org/wiki/Infrastructure/Schedule as always
15:04  * warren here
15:04 <@mmcgrath> abadger1999: Is it you or dgilmore thats the project lead for the package database?
15:04 < abadger1999> I should be.
15:04 <@mmcgrath> Go ahead and change that when you get a second.  Any progress to report?
15:05 < abadger1999> I'm working on integration with fas at the moment.
15:05 < abadger1999> lyz's report of fas2 progress has me wondering whether to put things off or not but...
15:06 <@mmcgrath> Yeah, I need to sit down and examine how hard its going to be to convert everything over to the new system.
15:06 <@mmcgrath> I suspect not terribly hard, we'll get a great deal of work done at FUDCon.
15:06 < abadger1999> did anyone add to the fas2 wiki page?
15:06 < abadger1999> I didn't see any wiki mail on it.
15:06 -!- daMaestro [n=jon@fedora/damaestro]  has joined #fedora-admin
15:06 < abadger1999> Nope.  Doesn't look like: http://www.fedoraproject.org/wiki/Infrastructure/AccountSystem2/LegacyApps
15:07 <@mmcgrath> I haven't.
15:07  * f13 skips to the bathroom.
15:07 < abadger1999> Basically, I've used website.py and think it's a pain.
15:07 < abadger1999> But it's what we have.
15:07 <@mmcgrath> yeah
15:08 < abadger1999> I'd be happy to drop it but until we know how many apps need to be ported to ldap, it's premature to make that commitment.
15:08 <@mmcgrath> Yeah, I have a feeling that FUDCon will bring a lot of what we need to do to light.
15:08 <@mmcgrath> So the VCS decision has been pushed off till 8 comes out so very little has been done with it.
15:09 < abadger1999> Anyhow -- I'm coding a turbogears identity/visit manager that talks directly to the current FAS db.
15:09 <@mmcgrath> abadger1999: sounds good.
15:09 < abadger1999> I'll share more as I progress.
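
A minimal sketch of the kind of code abadger1999 describes: SQLObject classes mapped onto the existing FAS database so a TurboGears identity/visit provider can check passwords against it directly. The connection URI, the 'person' table, its column names, and the crypt()-hashed password are all assumptions for illustration, not the real FAS schema.

 from sqlobject import SQLObject, StringCol, connectionForURI, sqlhub

 # Point SQLObject at the current FAS database (placeholder URI).
 sqlhub.processConnection = connectionForURI('postgres://fas@db1.example.org/fedorausers')

 class Person(SQLObject):
     class sqlmeta:
         table = 'person'                     # assumed table name in the legacy FAS db
     username = StringCol(dbName='username')
     password = StringCol(dbName='password')  # assumed to hold a crypt() hash

 def check_password(username, cleartext):
     """Return True if cleartext matches the stored crypt() hash for username."""
     import crypt
     people = Person.selectBy(username=username)
     if people.count() == 0:
         return False
     stored = people[0].password
     return crypt.crypt(cleartext, stored) == stored
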
15:09 <@mmcgrath> iWolf isn't here, lmacken isn't here so we'll skip the db upgrade and firewall.
15:09 <@mmcgrath> warren: how's smtp?
15:10 < warren> mmcgrath, I removed logwatch, I thought I knew exactly the config I wanted, but couldn't figure it out, and got diverted to something else.
15:10 <@mmcgrath> no worries
15:10 < Sopwith> abadger1999: Hey, I think I have code that does the FAS talking stuff already... I should dig it up.
15:10 <@mmcgrath> Sopwith: if you can find it that'd be great.
15:11 < abadger1999> Sopwith: Cool.  Send it!
15:11 <@mmcgrath> We'll skip config management for the moment.
15:12 <@mmcgrath> I don't know if you guys heard but we have had 1 million unique IPs contact the mirror list server for FC6.
15:12 <@mmcgrath> woot.
15:12 < Sopwith> wow
15:12 <@mmcgrath> As such I've been working further on the metrics system, as far as hardware profiling goes.
15:12 <@mmcgrath> Here's the stats page right now - http://publictest4.fedora.redhat.com/stats
15:13 <@mmcgrath> I'll be packaging the server and client for extras soon and try to get it into rawhide.
15:13 < f13> yeah, thats a lot!
15:13 < warren> cool
15:13 <@mmcgrath> This will be step one in what may become an automated 'problem reporter'.  Different people have different ideas as to what this should become, but for me, round one is a hardware profiler.  Let's see what's out there.
15:14 < abadger1999> mmcgrath: Any chance of us getting lutter and seth to give us a comparison of config mgmt at fudcon?
15:14 < skvidal> hahah
15:14 <@mmcgrath> Its possible, will lutter be at fudcon?
15:14 < skvidal> is it really worth it
15:14 <@mmcgrath> I think it'd be a waste of time though.
15:14 < skvidal> mmcgrath: +1
15:15 < abadger1999> Case of pick one and see how it goes?
15:15 <@mmcgrath> Yeah, I'm going to be reviewing them and will send a letter to the list.  If all goes well we'll just use it.
15:16 <@mmcgrath> It shouldn't be terribly difficult to change to something else down the road if we need to.
15:16 <@mmcgrath> I'd say a few days' work, tops.
15:16 < lutter> mmcgrath: abadger1999: yeah, I'll be at fudcon
15:16 -!- Rep0rter [n=thianpa@unaffiliated/reporter]  has quit [] 
15:16 <@mmcgrath> But there's so much else to focus on I'd just as soon we keep attention on the other stuff.
15:17 <@mmcgrath> f13: how's project hosting going?
15:18 <@mmcgrath> f13: we'll come back :-P
15:18 <@mmcgrath> lyz has been working on the account system rewrite and enhancements, we'll be spending a lot of time on it during Fudcon.
15:19 <@mmcgrath> even those of you who can't come to fudcon might do well to monitor the chat room because we may need to pass out tasks and things.
15:19 <@mmcgrath> At the end of FUDCon I'd like to have a functional system up and running and have at least some of our services with the ability to run off of it (even if we don't actually use it yet)
15:20 <@mmcgrath> kimo and paulobanon are gone so we'll skip some of the web stuff.
15:20 <@mmcgrath> Oh!
15:20 <@mmcgrath> mdomsch: how's the mirror management stuff going?
15:20 < mdomsch> coming right along
15:20 <@mmcgrath> Any word from farshad?
15:20 < mdomsch> I've got "accurate" lists of who's hosting core and extras now
15:21 < mdomsch> and adrianr sent some code I need to look at for the client side with xmlrpc transfer back to the server
15:21 < mdomsch> no farshad sightings this week
15:21  * mdomsch added 17 new mirrors to the extras-6 list today
15:21 < mdomsch> gonna need some serious help with the turbogears integration at some point
15:21 <@mmcgrath> Super!
15:22 < mdomsch> I've just been working with sqlobject essentially
15:22 <@mmcgrath> no worries, we've got some talent here IIRC.  I'm ok and can help you.
15:22 < mdomsch> goal is for something usable by test2
15:22 < mdomsch> to work out the kinks
15:22 <@mmcgrath> Awesome.
15:22 < mdomsch> that's all for now
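
This is not adrianr's actual code, just a rough sketch of what the client-side XML-RPC check-in mentioned above could look like: the mirror walks its local tree and reports the directories it carries back to the master server. The endpoint URL and the checkin() method name are made up for illustration; the real server would define its own API.

 import os
 import xmlrpclib

 SERVER_URL = 'https://admin.fedoraproject.org/mirrormanager/xmlrpc'  # placeholder URL

 def gather_content(topdir):
     """Walk the local mirror tree and list the directories this mirror carries."""
     dirs = []
     for root, subdirs, files in os.walk(topdir):
         dirs.append(root[len(topdir):].lstrip('/') or '.')
     return dirs

 def checkin(site_name, topdir):
     """Report this mirror's content back to the master server over XML-RPC."""
     server = xmlrpclib.ServerProxy(SERVER_URL)
     # 'checkin' is a hypothetical method name; the real server defines its own API.
     return server.checkin(site_name, gather_content(topdir))

 if __name__ == '__main__':
     print checkin('example-mirror', '/srv/mirror/fedora')
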
15:23 <@mmcgrath> Another thing that was added recently by jcollie was Mirrorman.
15:23 < jcollie> uh, mirrors on the mind there?
15:23 <@mmcgrath> jcollie: sorry mailman :-P
15:23 <@mmcgrath> yeah.
15:23 <@mmcgrath> so give us a roundup of whats going on and what might need to be done.
15:23  * mdomsch thought he'd been snaked
15:24 < jcollie> basically it came up on the lists after thl posted his note about renaming the mailing lists
15:25 < jcollie> several people suggested rather than merely renaming the fedora lists hosted at redhat, that we move the lists to a lists.fedoraproject.org domain
15:26 <@mmcgrath> My only concern is that the list stuff is one of those things that people expect to "just work" and it's not broken right now.
15:26 < skvidal> right
15:26 <@mmcgrath> But if there's enough drive or desire for it, we'll host it.
15:26 < skvidal> but we're going through a transition
15:26 < skvidal> in terms of the mailing lists anyway
15:26 < skvidal> and chances are it will take some time to sort them out during it
15:26 < skvidal> so it might make sense to make the shift now - rather than have to make another one, later
15:26 < skvidal> my questions about it, though
15:27 < skvidal> what sort of tricks has rh hacked into their mailman instance to make it go fast enough for 8 trillion subscribers?
15:27 < skvidal> what sort of things would we be losing by not having the rh infrastructure to rely on
15:27 <@mmcgrath> good question and I have no idea.
15:27 <@mmcgrath> Perhaps I can find out who it is and have a talk with them after I start.
15:28 < Sopwith> It doesn't sound like there's much to indicate that moving the lists is a high priority.
15:28 < jcollie> yeah, that's the real question... do we have enough horsepower to run the lists without it getting behind
15:29 <@mmcgrath> Sopwith: probably not.
15:29 <@mmcgrath> jcollie: no idea, I have no requirements for what the current lists need to run.
15:29 < skvidal> the lists are going to be reorganized
15:29 < skvidal> we know that much
15:29 < f13> crap, sorry, I got cube camped
15:29 < f13> ready to come back to me?
15:29 < skvidal> the moving of the lists to another server is the only question
15:30 <@mmcgrath> f13: yep, right after we get done talking about the mail stuff
15:30 < jcollie> yeah, that way the pain would only happen once
15:30 < f13> k
15:30 < jcollie> plus having it on "lists.fdp.o" would be a marketing plus
15:30 <@mmcgrath> I'd mentioned just doing a clean cut: leaving the old stuff there and bringing the new stuff to us, but I didn't see much of a response to that.  I think people want the same lists.
15:31 < jcollie> we could brand the list archives with a fedora website look
15:31 <@mmcgrath> One thing we need to discuss soon is account expiration and resource re-allocation.
15:31 <@mmcgrath> Let's say that we host the list.fp.o stuff and people flood us with requests for lists.
15:31 < warren> wait.. why handle lists separately from RH?  are the benefits listed anywhere?
15:31 <@mmcgrath> And some of them just don't get used, what will it take for us to release (delete) that list?
15:32 < jcollie> warren, i've started a wiki page here: http://fedoraproject.org/wiki/Infrastructure/Mailman
15:32 < skvidal> jcollie: thanks
15:32 < jcollie> mmcgrath, in theory the archives would have to be kept around a long long time
15:32 <@mmcgrath> uh-oh
15:32 < skvidal> archives are light
15:32 < skvidal> ?
15:32  * mmcgrath hears beepers going off all around me :-/
15:33 < skvidal> nothing here
15:33 < warren> mmcgrath, your patient had a cardiac arrest, run!
15:33 < japj> firealarm?
15:33 <@mmcgrath> I'm just talking about in general.  Like the plone instance at fpserv.fedoraproject.org
15:33 < skvidal> mmcgrath: die die die die die
15:33  * mmcgrath suspects orbitz.com or one of the other websites just died.
15:33 <@mmcgrath> At what point in time do we as a group push back at the community and say "look, this isn't working can we delete it?"
15:33 < warren> hmm.... orbitz.com isn't loading
15:34 < jcollie> ah, who needs plane tickets anyway ;)
15:34 <@mmcgrath> fpserv.fedoraproject.org has been out there FOREVER.
15:34 < warren> jcollie, I'm actually in the process of buying one now, until it stopped working =)
15:34 <@mmcgrath> And as we grow many people will get very ambitious.
15:34 < skvidal> mmcgrath: sounds like something the board would determine
15:34 <@mmcgrath> I'd like to give people every opportunity to succeed.  But I don't want us maintaining and backing up stuff that isn't getting used.
15:34 < skvidal> mmcgrath: ie: when sub-projects and SIGs die
15:35 < warren> Would it be too much for FTC or FPB to approve/remove lists?
15:35 < warren> FI just follows what FTC or FPB says.
15:35 <@mmcgrath> Perhaps we should have an official policy in FI that says when we take stuff to the board for removal.
15:35 < warren> FI will give a list to anyone that asks?
15:36 <@mmcgrath> no, but I'm working on an official 'request' form.
15:36 < warren> might it be better to let a different group that is closer to the developer's daily tasks decide if a list is creation-worthy?
15:36 <@mmcgrath> so that whenever resources are requested we get a proper project plan with ownership and such.
15:36 <@mmcgrath> We'll probably do that through the wiki, no exceptions.
15:37 < skvidal> mmcgrath: yes, just removal - not addition
15:37 <@mmcgrath> skvidal: to the board you mean?
15:37 < skvidal> mmcgrath: yes
15:37 <@mmcgrath> I'll get that up soon.
15:37 <@mmcgrath> So anyway in the meantime, mailman.
15:37 <@mmcgrath> Can I get a quick vote from those here: +1, -1, or 0?
15:38 <@mmcgrath> for hosting it on our infrastructure sometime in the next few months?
15:38 -!- mmcgrath changed the topic of #fedora-admin to: Vote!
15:39 < skvidal> mmcgrath: +0.3 or so
15:39 < skvidal> it's not critical that we run it
15:39 < skvidal> but I wouldn't be pissed if it happened :)
15:39 < skvidal> but
15:39 < jcollie> +1 (dunno if I have a vote though)
15:39 < abadger1999> mmcgrath: +0.5
15:39 <@mmcgrath> jcollie: those that show up, have a vote.
15:40 < abadger1999> We had some people who volunteered to help run it.
15:40 < skvidal> I think that's contingent on an answer about needed infrastructure
15:40 < mdomsch> +0.5
15:40 < abadger1999> But I don't know that there's a compelling reason to change.
15:40 <@mmcgrath> Ok, so what I'm detecting is a 'sort of' from us.  In which case I say that we do the project but don't drive it.
15:40 < skvidal> an overwhelming 'eh' from the crowd, yah
15:40 <@mmcgrath> We'll sit back and let the FAB or developers make the final decision and make the final request to actually do it.
15:40 < skvidal> sounds like a plan
15:41 <@mmcgrath> k, f13: you're up!
15:41 <@mmcgrath> Hosting!!
15:41 < skvidal> mmcgrath: though finding out from the RH IS folks what they do now is good
15:41 < skvidal> mmcgrath: please
15:41 <@mmcgrath> <nod>
15:42 < skvidal> f13: ping
15:43 <@mmcgrath> buhahah, ok.  We'll open up the floor until f13 gets back.
15:43 -!- mmcgrath changed the topic of #fedora-admin to: open floor
15:43 <@mmcgrath> anyone have anything they wish to discuss?
15:43 < daMaestro> status of ldap based accounts system?
15:44 <@mmcgrath> We have one up and running on one of the test servers, we'll be hitting it heavily during FUDCon.
15:45 < daMaestro> hmmm, awesome. i'd like to get a test plone instance up and running for fudcon
15:45 < daMaestro> i have xen machine test7 and just need to get the ldap bits into extras (for plone).. if i get that setup.. is there anything else i will need to do? will we all just have to tunnel through bastion for access?
15:46 < jcollie> what about my idea for more secondary nameservers for fp.o, esp if we can get some on other continents
15:46 <@mmcgrath> jcollie: we can do that, its an issue of volunteers and trust.
15:46 -!- lyz [n=lyz@dsl081-149-006.chi1.dsl.speakeasy.net]  has joined #fedora-admin
15:46 <@mmcgrath> we have a pretty low budget.
15:46 < jcollie> i think that DNSSEC can take care of most concerns about security
15:46 <@mmcgrath> lyz: yo!
15:46 < daMaestro> are there accounts setup on it? may i request a group be setup for admin access to the test plone instance? who do i contact about getting test7 attached to the test ldapserver?
15:46 < lyz> yo!
15:46 <@mmcgrath> damaestro was just asking about the new account system.
15:46 < jcollie> mmcgrath, yeah i was suggesting strictly volunteers
15:47 < lyz> good timing eh?
15:47 < daMaestro> lyz, bravo
15:47 <@mmcgrath> jcollie: ahh
15:47 <@mmcgrath> f13: back yet?
15:48 < Sopwith> editdns.net will do free backup dns for you if you want.
15:48 < lyz> daMaestro, the LDAP server is set for anonymous access at the moment
15:48 <@mmcgrath> Sopwith: cool, we'll have to take a look.
15:48 < daMaestro> lyz, ok.. i'd like to get a test plone site up and running for FUDCon
15:48 < jcollie> i think someone that works for the french cctld might be able to help
15:48 < lyz> just email me if you need anything
15:49 < f13> yeah, I'm back
15:49 < daMaestro> lyz, we should collaborate to make that happen (basically i need to learn the schema and/or we need to add in plone specific stuff)
15:49 < f13> jesus, I should work from home.
15:49 < jcollie> daMaestro, there's already a plone site up fpserv.fedoraproject.org
15:49 < daMaestro> normally we don't need to add anything
15:49 < daMaestro> jcollie, yes.. that is plone 2.1 (no PAS) zope 2.8.9 (iirc)
15:49 < lyz> daMaestro, drop me a line at lyz27@yahoo.com and will work it out
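
A small sketch of querying the test LDAP account system anonymously, as lyz describes. The host name, base DN, and attribute names are placeholders; the actual FAS2 schema is still being worked out on the wiki page linked earlier.

 import ldap

 LDAP_URI = 'ldap://publictest5.fedora.redhat.com'  # placeholder test host
 BASE_DN = 'ou=People,dc=fedoraproject,dc=org'      # assumed base DN

 def lookup_account(username):
     """Anonymous bind and search, matching the current test server setup."""
     conn = ldap.initialize(LDAP_URI)
     conn.simple_bind_s()   # no credentials: anonymous access
     try:
         return conn.search_s(BASE_DN, ldap.SCOPE_SUBTREE,
                              '(uid=%s)' % username, ['uid', 'cn', 'mail'])
     finally:
         conn.unbind_s()

 if __name__ == '__main__':
     for dn, attrs in lookup_account('damaestro'):
         print dn, attrs
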
15:49 < jcollie> yeah working from home would be awesome
15:49 <@mmcgrath> we've got plone sites all over the place.  I just set another one up for the docs guys last night.
15:50 < daMaestro> jcollie, we should target plone 2.5.1 and zope 2.9.6 (which are currently in extras)
15:50 < f13> I have the ability, I just needed the hardware in my cube.
15:50 < f13> ok, I'll sit here and wait (:
15:50 <@mmcgrath> f13: Ok!  you're up
15:50 -!- mmcgrath changed the topic of #fedora-admin to: Hosting
15:50 < f13> ok.
15:50 < f13> so.
15:50 < f13> I like the idea of moving hosting to another site, say a nice university that wants to help out, or OSDL or whatever.
15:51 < f13> a clear separation from our package and build infrastructure
15:51 < f13> things I see needing to happen regardless:
15:51 <@mmcgrath> f13: nod.
15:51 < f13> Split git/hg/svn off of cvs-int, put them into a new beefy box that is able to run a few xen paravirt instances.
15:51 < f13> one SCM per paravirt guest, and finally a guest for Trac
15:51 <@mmcgrath> k
15:52 < jcollie> f13, are you using sqlite for the trac db?
15:52 < f13> each SCM guest makes use of maybe nfs storage for the SCM, easy to grow the FS perhaps.
15:52 < f13> jcollie: yes
15:52 < f13> Trac guest could nfs mount (Read only) the nfs dir that has all the SCM content on it for direct repo access
15:52 < jcollie> i don't know trac that well, but i was thinking that it'd be pretty neat to use puppet to manage the projects
15:53 < f13> jcollie: I'm not entirely sure what that buys us, but we'll come back to that later.
15:53 < warren> jcollie, different topic
15:53 <@mmcgrath> f13: I think that's doable, it'll take architectural changes on our part but it's something that's on my agenda.
15:53 < jcollie> yeah yeah :)
15:53 < f13> The second thing we need for Hosting project is raw webspace.
15:53 <@mmcgrath> how much do you think we'll need initially.
15:53 < f13> maybe it's a 5th guest for login.hosted.fedoraproject.org or something, which has an NFS-mounted filesystem that is the webstore, users can log in, do things with their homedir or whatever
15:54 < f13> yet another guest or system runs apache and serves up the content, but no users can log into this box
15:54 <@mmcgrath> mhmm
15:54 < f13> mmcgrath: honestly I think 1TB total would do fine for a long while.
15:54 <@mmcgrath> hah!
15:54 < jcollie> are we going to allow shell access or strictly file upload/download
15:54 <@mmcgrath> I have no idea what we have in terms of space or purchasing more.
15:55 < f13> one TB at /mountpoint/  and then from /mountpoint/ we have git/ hg/ svn/ trac/ web/
15:55 < jcollie> i guess shell access could be useful for running createrepo
15:55 < mdomsch> mmcgrath, need to understand that from RH IS about the netapps
15:55 < f13> that way we don't have to guess at how much space for each service, there is one large pool shared by all, and perhaps some smart quota work
15:55 < mdomsch> else we're looking at buying/getting donated some serious hardware
15:55 < f13> well, quotas necessary for webspace.
15:55 <@mmcgrath> mdomsch: I'd like to know that too.
15:55 < warren> An entire paravirt guest could be exclusively for shell access.  That entire guest could have restrictive iptables disallowing outgoing network access.
15:55 < smooge> mmmm netapps
15:56 < f13> jcollie: I'd like to be able to do shell access, but if folks feel that is too risky, then at least ssh for the purpose of rsync or sftp or...
15:56 < mdomsch> 1TB is peanuts, it's the 5TB for the package db (and growing)
15:56 <@mmcgrath> f13: can you write up a 'dream plan' for this and send it to the list?
15:56 < mdomsch> that we start getting into real money
15:56 < f13> mmcgrath: I think I can.
15:56 < warren> f13, I think shell access can be done in a secure way, but we can worry about that later.  At first sftp only would be easy to do.
15:56 < f13> nod.
15:56 < mdomsch> http://fedoraproject.org/wiki/InfrastructurePrivate/EquipmentWishlist
15:56 <@mmcgrath> f13: work, we'll see what you have in mind and then try to accommodate.
15:57 < f13> then we would need some time from the art folks for a better theme for Trac (:
15:57 < f13> and maybe some love from webdevs to create a tool to create new trac instances and whatnot.  Doing it by hand is OK for 1.0 I think.
15:58 <@mmcgrath> yep
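
The tool f13 mentions for creating new Trac instances doesn't exist yet; the sketch below only shows what a first pass might look like, shelling out to trac-admin's initenv. The paths are placeholders, and the exact arguments initenv accepts vary between Trac versions, so this is illustrative rather than a working recipe.

 import os
 import subprocess

 TRAC_ROOT = '/srv/trac'    # placeholder: one Trac environment per hosted project
 SCM_ROOT = '/srv/scm/svn'  # placeholder: read-only mount of the SCM trees

 def create_trac_instance(project):
     """Create a new Trac environment with an SQLite db, like the current hosted setup."""
     envpath = os.path.join(TRAC_ROOT, project)
     repopath = os.path.join(SCM_ROOT, project)
     # These arguments mirror the questions trac-admin's initenv asks interactively;
     # check the installed Trac version before relying on the exact argument list.
     subprocess.check_call(['trac-admin', envpath, 'initenv',
                            project, 'sqlite:db/trac.db', 'svn', repopath])
     return envpath

 if __name__ == '__main__':
     create_trac_instance('example-project')
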
15:58 <@mmcgrath> Is that all we have for the meeting right now?  We're right at the hour mark.
15:58 -!- mether [n=ask@fedora/mether]  has quit [Read error: 104 (Connection reset by peer)] 
15:58 < mdomsch> f13, while we're waiting on xen guests and storage for hosting
15:59 < mdomsch> can I run gitweb on cvs-int?
15:59 < mdomsch> or not gitweb, gitd to export the trees anonymous ro
15:59 < mdomsch> (trac has a gitweb-like viewer already)
15:59 -!- abompard [n=gauret@bne75-8-88-161-125-228.fbx.proxad.net]  has quit [Remote closed the connection] 
16:00 < f13> mdomsch: yeah, some of the hosted projects already use git, so they have a browser
16:00 < f13> mdomsch: I don't mind trying to set up gitweb on cvs-int, but it may be tricky to not conflict with hg web and cvsweb and...
16:00 < mdomsch> more worried about gitd really
16:01 <@mmcgrath> hmm
16:01 <@mmcgrath> mdomsch: sure.
16:01 < warren> hosted projects should really split their hg/git/svn/cvs from cvs-int
16:01 < mdomsch> would need to vhost git.fedoraproject.org to run gitweb is all - right now all I see is svn trees :-)
16:01 <@mmcgrath> We should have a storage solution put together for our SCM system.
16:01 < warren> It is good that we tested it on cvs-int, but for security reasons we really need to move it away?
16:01 <@mmcgrath> right now it just doesn't exist.
16:02 < mdomsch> so whatever system is really git.fp.o
16:03 < mdomsch> I just want to add a gitd running there
16:03 < mdomsch> but I can't tell from the wiki which one that is :-)
16:03 < f13> warren: that was in my above list of what needs to happen.
16:03 < f13> <f13> things I see needing to happen regardless:
16:03 < f13> <@mmcgrath> f13: nod.
16:03 < f13> <f13> Split git/hg/svn off of cvs-int, put them into a new beefy box that is able to run a few xen paravirt instances.
16:03 < warren> ah
16:04 < warren> sorry.
16:04 <@mmcgrath> mdomsch: is there a need for that right now or can we wait a bit?
16:04 < mdomsch> a little bit...
16:04 <@mmcgrath> buhaha, I gotta go in a sec - http://www.cheaptickets.com/outage_partner.html
16:04 <@mmcgrath> err http://www.cheaptickets.com/
16:04 <@mmcgrath> mdomsch: k, stay on me though, we'll get something in there soon.
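
A sketch of the anonymous read-only export mdomsch is asking about: mark each hosted repository with a git-daemon-export-ok file and run git-daemon over the base path. The /srv/git layout and repository name are placeholders; whichever box ends up answering as git.fp.o would carry the real tree.

 import os
 import subprocess

 GIT_ROOT = '/srv/git'  # placeholder base path for the hosted git trees

 def export_repo(repo):
     """git-daemon only serves repositories containing a git-daemon-export-ok file."""
     open(os.path.join(GIT_ROOT, repo, 'git-daemon-export-ok'), 'w').close()

 def start_daemon():
     # git-daemon is read-only by default (it only enables upload-pack);
     # per-repo export markers keep the export opt-in rather than using --export-all.
     subprocess.check_call(['git', 'daemon', '--detach', '--syslog',
                            '--reuseaddr', '--base-path=%s' % GIT_ROOT])

 if __name__ == '__main__':
     export_repo('example-project.git')
     start_daemon()
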
16:04 < warren> meeting end?
16:05 -!- mmcgrath changed the topic of #fedora-admin to: open floor again
16:05 <@mmcgrath> Anyone have any last minute items that can't be taken to the list :-D ?
16:06 <@mmcgrath> alllright
16:06 <@mmcgrath> Mark Meeting End ===================