Posts tagged “news”.

Scholarly Publishing and Authenticated Reviews

First, a review of a neat new tool that should be useful to many academics:

GPeerReview is a very simple Open Source tool that lets you write a review of a work, embed a hash of the work in your review, and sign that review with your digital signature (using your GPG key). The last two things are pretty neat. The hash allows you to be sure that people know which version of a paper you reviewed. Or at least, they will know if the version they have matches the version you had. This would be useful in the case where major changes are made to the paper that contradict your review.

Then, signing your review so that the author (and their publisher/advisor/dean/what have you) knows it is actually from you is pretty neat, and an obvious use of gpg. In fact, GPeerReview is essentially just a wrapper around the GnuPG command-line tool (see the FAQ).
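The two steps are simple enough to sketch. Assuming hypothetical filenames, and with SHA-256 standing in for whatever hash GPeerReview actually uses, the hash-embedding step looks roughly like this; the signing step is then just a call out to the gpg CLI:

```python
# Rough sketch of what GPeerReview automates. Filenames are hypothetical,
# and SHA-256 is a stand-in for whatever hash the tool really uses.
import hashlib

def write_review(paper_path, review_text, review_path):
    """Write a review that embeds a hash of the exact version reviewed."""
    with open(paper_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    with open(review_path, "w") as f:
        f.write(review_text + "\n")
        f.write("Reviewed version (SHA-256): " + digest + "\n")
    return digest

# Signing would then be a thin wrapper around gpg, roughly:
#   subprocess.run(["gpg", "--clearsign", review_path], check=True)
```

Anyone holding a copy of the paper can recompute the hash and check it against the signed review.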

I think this is a pretty interesting tool that could have some great uses, especially if we integrate it with the workflow of academics (somehow). Step one of that implementation would be to move it from the CLI to some sort of Word/OpenOffice.org plugin. Or, even better, to provide a web-based service for this.

Crazy Idea
Launchpad for Scholarly Articles and GPeerReview

Going back to my crazy idea of a Launchpad for Scholarly Articles: basically, a service that lets users link published articles, whether open access or not, with pre-prints or author-deposited versions in Institutional Repositories. The killer feature of this service would be giving people who DON’T have access to the expensive scholarly journals a way to read and be informed via the authors’ pre-prints, which are not restricted by the overzealous journal publishers.

Then, add on the ability for readers of those articles to make comments on and provide useful reviews of the material. Even adding this ability to places like arxiv.org would be great; it provides a mechanism to build community. And as we all know, the community is what makes any service an important resource for people. Without community the service is just a collection of tools.

But, I’ll be honest with you, I don’t know all of the various web-based services out there for scholarly communication; maybe someone has already implemented something like this. Leave a comment if you know of anything out there like this.


imapfilter + offlineimap + msmtp + mutt + abook = email

So, I’ve spent a little over a week setting up my new email consumption/creation system. As you can see from the title of this blog post, there are a few parts to it. Why would I do something crazy like edit config files for 4 different apps JUST to read and write email? Well, I wasn’t happy with Thunderbird (yes, I’ll try 3.0 when it hits the repos) and Evolution wasn’t at all what I wanted. I do have Gmail, so why not just stick with the web interface? Because I want to move toward more self-hosted solutions for web apps. Also, since I have more than one account, I want different messages to be sorted and archived differently.

In Thunderbird I had an extension that allowed me to press “y” and the current message would be “archived” to the gmail All Mail folder. This was great, but it only supported one account. If I was reading my work email in Thunderbird (which is also hosted by gmail) and I hit “y” the message would go to my personal gmail account’s All Mail folder, not the work account one. Not good (and a dumb limitation).

So, what email program allows you to have complete control over those types of settings? Mutt. And yes, (Al)pine also. But, I have friends local to me who use mutt so exchanging .muttrc files and such is easier and we can meet in person to share tips.

What I want to do with this blog post, though, is not convince you that Mutt is the best solution for you. I do, however, want to share what I did to set everything up for use with Mutt. In fact, all the rest of the pieces of this setup can work equally well with something like Alpine or even Thunderbird.
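To make the per-account archiving concrete: in mutt, folder-hooks let the “y” macro point at whichever account you are currently reading. A hypothetical ~/.muttrc fragment (the personal/ and work/ maildir names are illustrative, assuming offlineimap syncs each account to its own local maildir):

```
# Hypothetical ~/.muttrc fragment. Folder-hooks fire when you enter a
# mailbox, so "y" always archives to the *current* account's All Mail
# folder -- exactly the per-account behavior the Thunderbird
# extension lacked.
folder-hook personal/ 'macro index,pager y "<save-message>+personal/[Gmail].All Mail<enter>" "archive"'
folder-hook work/     'macro index,pager y "<save-message>+work/[Gmail].All Mail<enter>" "archive"'
```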

(since it is a long post, I didn’t want to spam your reader, click for the rest of it)


Web Presence Up-Keep

So, part of being a domain-owning, server-space-using, web-software-running, open-source-promoting person is that one periodically needs to update software to the latest versions and change software to meet ever-changing situations and goals.

In short: I’ve made some changes around here that hopefully you have not noticed[0].

First – I finally upgraded to WordPress 2.7 (yes, a bit late). What took me so long? I wanted to change my installation method to using svn so I can just “svn sw” when a new version is released. In doing so, I ran into a minor permissions issue that was preventing me from completing the switch-over, but thanks to my buddy (and sysadmin) Asheesh, all is better now. If you want easy upgrades of WordPress via svn, check out this guide. It is a bit wordy and I have never liked their banner, but it outlines things in language for everyone.
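For the curious, the workflow that guide describes boils down to a couple of commands against WordPress’s official Subversion repository (the tag numbers below are just illustrative; use whatever the current release is):

```
# One-time: check out the current release tag into the blog directory.
svn co https://core.svn.wordpress.org/tags/2.7/ .

# On each new release: switch the working copy to the new tag.
svn sw https://core.svn.wordpress.org/tags/2.7.1/ .
```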

Second – Blog Spam. Or, more correctly, BlogSpam. Since I try to use Open Source solutions for all of my needs (see my post on TinyTinyRSS), the use of Akismet was a little, well, sad. But, thanks to an Autonomo.us blog post, I found out about BlogSpam.net (I love straightforward software names).

Basically, it is a drop-in replacement for Akismet, but it is Open Source and even complies with the Open Software Service Definition. So if you are looking to remove one more piece of proprietary software from your web presence, check out BlogSpam.net. And for those of you who use Drupal, there is even a BlogSpam.net plugin: check out the plugins page.

[0] – I had a minor hiccup that most likely lasted from Jan 16th 3am to 3pm EST. During the re-install process I failed to copy back my .htaccess, and thus none of the posts were showing up since I use “pretty urls.” Sorry if you were trying to reach a post and couldn’t.


An eventful week

I am now safely back from the Ubuntu Developer Summit in Mountain View after a long week of planning the next 6 months for Ubuntu.

As I said in an identi.ca message: “I am just now realizing how crazy this past week was. You don’t notice it when you are in the middle.”

But now that I am back and able to reflect on what happened I have this to say: WOW! I am really excited about what will be happening in Jaunty and beyond. I am sure that because this was my first UDS I am, on average, more excited than some. It is always inspiring to be in groups of highly productive and intelligent people all working towards the same (or similar) goals. Now that I have this inspiration it is time to see what I can do with it.

First: My personal/work project (I work for Creative Commons): content-producing/playing applications should be “license aware.” WHAT? By that I mean that applications that play media (songs, videos, images) could display the license for the currently playing item. A good example is Banshee: there could be an additional column that shows which license a song is under. Words don’t describe it well, so how about a picture:
Banshee with column displaying CC licenses
The really cool part about the above image is that Gabriel Burt added that functionality after the discussion on Monday at UDS about this very topic. He saw my dent that it was being discussed and decided to code it up for Banshee. It apparently only took him 40 minutes (!) to do it. Gabriel is a rock star, pure and simple.

Gabriel also wrote all of the license detection code himself, which he didn’t need to do. Creative Commons provides an LGPL-licensed library (liblicense) that can read and write license metadata for a variety of file formats (ogg, mp3, pdf, jpg, png, mov, etc.). But Gabriel would have needed to write Mono bindings for liblicense, as it is written in C and only has Python and Ruby bindings right now.
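To illustrate the display half of the idea in a few lines of Python: once something like liblicense has pulled a license URI out of a file’s metadata, the player just needs to turn it into a short label for the column. The URI-to-name table here is only an illustrative subset, not liblicense’s actual API:

```python
# Sketch of the "license aware" column idea: map a license URI found in
# a file's metadata to a short label a player like Banshee could show.
# liblicense does the real metadata reading; this table is just an
# illustrative subset of the CC licenses, not an exhaustive list.
CC_LICENSES = {
    "http://creativecommons.org/licenses/by/3.0/": "CC BY",
    "http://creativecommons.org/licenses/by-sa/3.0/": "CC BY-SA",
    "http://creativecommons.org/licenses/by-nc/3.0/": "CC BY-NC",
}

def license_label(uri):
    """Return a short display label for a license URI, with fallbacks."""
    if uri is None:
        return "Unknown"          # no license metadata in the file
    return CC_LICENSES.get(uri, "Other")
```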

Second: The Jams that various LoCos have been putting on are always a winner. Whenever you get a group of people together who want to learn something new with each other good things tend to happen. The Michigan Team has done Packaging Jams and Bug Jams. There are even thoughts of expanding the idea to other activities (Answer Jams, Translation Jams [wouldn't work too well for US State teams], and such).

Third: Now that we are getting good at putting on events like Jams and release parties we should let others know how we do it! The various LoCo teams are going to start producing some Best Practices when it comes to hosting events and such. Basically, we want every team to know how Mr. 4k and the French LoCo were able to host a release party for FOUR THOUSAND people. Granted, not every team will be able to do something like that in April, but learning how the French LoCo performed marketing would help us all.

Fourth: The Ubuntu Free Culture Showcase is a great opportunity for artists to get their works on MILLIONS of computers worldwide; how can we get more participation in this contest? This is one project which I will be working on with Jono. Ideas: get the news out to other venues that we didn’t get to last time (ie: ccMixter).

I think that should be enough to keep me busy for the next few months. How about you: what projects/ideas really caught your attention at UDS?


Google Book Settlement

This is old news now since it happened over a week ago, however, the continued discussion of this settlement is needed and hopefully welcomed.

I have been silent on this settlement on this site due to a few reasons (full disclosure):

  • I was at the Open Content Alliance’s (OCA) yearly meeting in the Presidio of San Francisco when the settlement was announced. As such, I was privy to the private discussions between members of the OCA and others. I didn’t want to say anything I learned there before they had a chance to say it themselves.
  • I work with a very high-level administrator at the University of Michigan Libraries. The UofM Libraries are one of the Google Book “Fully Participating Libraries” and as such have a special relationship with Google. This relationship may influence the opinions of members of the UofM libraries about this settlement in one direction or another.
  • I have a personal moral preference for the methods of the Open Content Alliance and feel that some of Google’s Terms of Use (in the contracts signed with libraries) are less than good.
  • There have been many people saying contradictory things about this settlement; everyone couldn’t be right in their analysis. Just like sunlight is the best disinfectant, time is the best producer of truth.
  • The settlement is one-hundred and forty-one (141!) pages long. This doesn’t include the fifteen (15!) attachments to the settlement. This is part of why so many people were making false claims: they just didn’t get to the part that explained what would happen in the situation they were talking about.
  • Plus, I was going to be giving a presentation on the Google Library Project for my class on Intellectual Property and Information Law (PubPol 688/SI 519). I decided to wait until after the presentation to post my views. I could have posted a draft of my presentation beforehand to see what sorts of comments I would receive, but to be honest, I wasn’t thinking that far in the future. Graduate School does that to me.


Here is the presentation I gave yesterday (2008-11-7):

(.odp, .pdf, .ppt)
Unfortunately, for you, my slides don’t contain all of the information I conveyed (because that presentation style sucks). Fortunately, for the students in the class, my slides didn’t contain all of the information I conveyed.

You will notice that my presentation takes a very hard look at the Settlement; I’m not one to see something like this and think it is the best outcome we could have had. Yes, there are some really great things in the settlement, but that doesn’t mean I can’t critique the parts that are bad.

A quick example of one of the really great things the Settlement provides: All “Fully Participating Libraries,” libraries that have signed scanning agreements with Google and have had a sizable percentage of their collections scanned, will have free access to the entire corpus of books Google has scanned. Not just the books that were scanned at that specific library, but the books scanned at all libraries. So, if you are a student at the University of Michigan, University of California, Stanford, or any of the libraries listed in Settlement Attachment G “Approved Libraries,” you can be happy about that.

If, however, you are a student at any other university or college you won’t be as happy. Your school, unless it pays the subscription fee (not yet disclosed), will only be able to have a limited number of “terminals” that can be connected to the Google Library; a more correct term would be the Google Bookstore. Even the UofM’s own Paul Courant said this settlement will create the “Universal Bookstore;” he didn’t say “Universal Library.” But I digress….

These other libraries will have a set number of virtual terminals based on the size of their school (1 per 10,000 students or 1 per 4,000 students, depending on the type of school). I say “virtual” terminals because access is tied to physical computers: the number of computers with access at any one time is fixed, but which computers within the library have that access can vary based on demand.
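To make those ratios concrete, here is the arithmetic with invented enrollment numbers; note the settlement’s exact rounding rules aren’t reproduced here, so plain floor division is an assumption on my part:

```python
# Worked example of the terminal allotment: one terminal per 10,000
# students (or per 4,000, depending on the type of school). Floor
# division is an assumption; the settlement's rounding and minimum
# rules are not reproduced here.
def terminals(students, students_per_terminal=10000):
    """Terminals a school of the given (hypothetical) size would get."""
    return students // students_per_terminal

# e.g., for a hypothetical 30,000-student university:
#   terminals(30000)        -> 3 terminals
#   terminals(30000, 4000)  -> 7 terminals
```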

Issues that I didn’t go into depth in my class presentation that are none-the-less important include:

  • The effective monopoly on the materials that Google now has. Sure, others could join the game at the $145 million price tag, but since this was a settlement, not a legal decision, there isn’t a lot of incentive for groups such as the OCA to go into talks with the AAP and Authors Guild.
  • To continue my digression from above: the fact that this is going to be a “Universal Bookstore” not a “Universal Library” is slightly saddening.
    • I don’t have a legal reason to feel sad; the copyright holders have every right to charge for these materials. But I feel like everyone other than Google, the authors, and the publishers are being scammed. Again, not for a legal reason, but for a moral reason:
    • Libraries, through public funding, have been keeping these books safe for the last 70 years. These books, up until the day of the settlement, have been worthless to the publishers and authors. These books are out of print, and thus all purchases of them have been paid to individuals based on the first-sale doctrine. Now, Google, through its Universal Bookstore, will sell you these books and pay the authors for them. Google will not pay the Libraries, who were the ones who made this whole endeavor possible. Sure, the libraries agreed to only get the digital copies back as part of their agreements with Google, but that was before anyone had thought about this possibility. Should those contracts be renegotiated?<end_rant>
  • What Happened to Fair Use?
    • This could possibly be one of my biggest critiques of this settlement: the pure fact that there is a settlement. This was a copyright infringement case brought against Google by two associations, the Association of American Publishers and the Authors Guild. Google had a fairly good Fair Use argument and may have indeed won the case based on it. This would have been a GREAT THING (most likely). Others would have had the same rights as Google as it pertains to the scanning and displaying of books.
    • Now, however, Google is a “special citizen” in this arena; they have “rights” others do not. Is that fair? No. Is that what is best for our future, and the future of libraries? No.


Hopefully I don’t sound too negative towards this settlement. OK, let’s be honest, I am pretty darn negative towards it. But hey, that is my job, or at least what I see my job being. There are plenty of people out there being paid large sums of money to tell you how good this settlement is. The ones out there telling you how bad it is are most likely not being paid to do so; I’m certainly not.

If you have read this far and are still interested in this topic, you should check out what the rest of the world has been saying about this settlement. A good place to start would be TechDirt’s opinion on the matter. And, the Open Access News blog has posts that summarize others’ opinions in four parts (1, 2, 3, and 4).

EDIT:
Full Disclosure (thanks to Jon for reminding me): I am employed by Creative Commons and through that work have been involved with the OpenLibrary Project. Also, I am employed by Paul Courant, the Dean of Libraries for the University of Michigan. Thus, there may be some conflicting influences on my opinions. I am in a special dual position.


Ubuntu Bug Day – TOMORROW!

From Bug Squadder Dereck Wonnacott:

This week’s target is *drum roll please* Thunderbird!
* 39 New bugs need a hug
* 36 Confirmed bugs just need a review

Bookmark it, add it to your calendars, turn over your egg-timers!
* Thursday August 28th
* http://wiki.ubuntu.com/UbuntuBugDay/20080828

That’s right: your favorite email client is up on the block, ready for some triage help. Come out and help us make your emailing life better. It looks like a lot of Thunderbird bugs need some help with reproducing the issue.

Be sure to record your efforts by participating with 5-a-day.

See you in #ubuntu-bugs on Freenode tomorrow!


Last post about the GBJ…

for a little while… I promise.

But, I just wanted to let everyone on Planet Ubuntu know how things went here in Michigan. As some of you may know, I wasn’t able to physically make it to the Michigan LoCo’s event due to my current internship in San Francisco. That didn’t stop me from participating, though! I even set my alarm on a Saturday morning so I could wake up and have breakfast before it started (dang time zones!).

On to the report:

Even though only 7 people (including Jorge and me, participating remotely) were able to take part, we squashed or otherwise improved 54 bugs! I’d say it was a pretty successful day. We had at least 3 people who had never triaged bugs before, yet our average was over 7 bugs per person! To see how our team did compared to others around the world, check out the 5-a-day stats page. Also, be sure to check out Craig’s write-up of our event on his blog.

Overall the Global Bug Jam was a great success, in my own humble opinion. Not only did we as a community accomplish something amazing by just planning and executing the events but we also did a lot of good work. Daniel Holbach has created a nice image showing the results:

Two things. 1) The image will apparently be updated as needed and 2) WE HAD A GOAL NUMBER?! I didn’t know that!

I think what is going to be really awesome is comparing the results of this first GBJ and the next one we have. You did know that there will be more right? Oh yes, there will be more. So next time, your team should participate too!


Michigan LoCo and the Global Bug Jam

This Saturday the Michigan LoCo Team will be hosting our own event for the Global Bug Jam.

The deets:

More information can be found on our event page.

GO BLUE!


Global Bug Jam, It’s going down!

It’s coming up, the Global Bug Jam.  Are you ready?

Have no fear, your friendly Michigan LoCo team will be hosting a GBJ event in Southeast Michigan where you can come learn the trade of triaging and have a great time doing so.  I know from experience that their Bug Jams are great events.  They even filled a room at Penguicon on the topic thanks to Wolfger.

Come one, come all to the Global Bug Jam, no experience required, only a desire to have fun and contribute.

The Important Information:

Where: Clinton Macomb Public Library (map)
When: 1pm to 6pm on Saturday August 9th
Who: The Michigan LoCo Team and You!

(See THIS PAGE for the latest information)


BugHugDay – This Thursday

This just came across the email, courtesy of Nick Ellery:
——-
This week’s Hug Day will be focusing on Apt! There are currently about
127 New bug reports regarding Apt and we will be focusing on reducing
that number in addition to looking at some outstanding Incomplete and
Confirmed bugs.  We’ll do this by following up with reporters,
documenting test cases, and confirming bug reports.  The event
will be held in #ubuntu-bugs on Freenode. The list of targeted bugs
and tasks is posted at:

https://wiki.ubuntu.com/UbuntuBugDay/20080724

Our goal is to deal with all of the bugs on that list.

So on 24 July 2008, in all timezones, we’ll be meeting in #ubuntu-bugs
on irc.freenode.net for another Ubuntu Hug Day.

https://wiki.ubuntu.com/UbuntuBugDay
—–

So if you have some time and want to help out, come join us!
