# Thursday, June 14, 2012

Yesterday I attended a great BOF session on “What’s the path to successful Scrum?” and came away with a lot of ideas, not only about what Scrum can do, but also about what can be improved in the process our team currently has.

It started off with the speakers eliciting from the audience a list of Scrum-related topics to cover. On the one hand, there were things many people wanted to know, such as “How do I get started?” and “What if I am doing almost-Scrum?”; on the other, there were folks who’d already been doing Scrum and had very specific issues, almost as if they were tuning the process to fit their organizational culture.

The most important thing I learned from the BOF is that if you have a team that’s constantly interrupted, and cannot be left alone to do a bunch of tasks (on a single project) for the duration of a sprint, Scrum is not for you.

This is exactly how we function. Like many IT teams in academia, we have a mix of new application development, maintenance, bug fixes, data cleanup, requests for information and, worst of all, “fires” caused by the large number of moving parts inherent in any large organization. Oh, and of course, IT tasks and features that exist to serve political needs.

Now let’s say we want to implement Scrum. This requires that we allocate a fixed amount of time for a sprint, during which we pick a set of tasks and commit to doing only those tasks within that time frame. Our team just cannot go for a week without being interrupted by some support issue on a project other than the one being worked on. I can be sure of that because most of our support tickets come from a few legacy applications that are crap and that we don’t really do any development on.

We cannot change our environment for reasons to do with history, institutional inertia, etc. And this means we can’t use Scrum. Most people I’ve talked to (in academia) are in the same boat. And they shouldn’t use Scrum either.

So does that leave no room for people who want to be agile in chaotic environments? Nope: apparently that’s what Kanban is for, and I’ll talk about it in another post, after I get through the “Introduction to Kanban” session by Steven Borg.

Thursday, June 14, 2012 9:01:30 PM (GMT Daylight Time, UTC+01:00)
# Wednesday, June 13, 2012

I attended a session on Real-World Developer Testing with Visual Studio 2012 that was all about how to tackle nasty testing issues: static classes, singletons, hard-coded dependencies, infrastructure boundaries, seams and so on.

I had so many questions in my head that I walked up to the speaker, David Starr (from Scrum.org and PluralSight), and asked him a couple of questions about how to get started with testing and the cultural challenges involved.

What I was expecting was a few words about how it’s hard. What I got was a totally down-to-earth conversation with a regular guy, one that opened my mind to a whole new way of thinking about testing and agile. He said:

If you don’t have build automation, none of this fancy testing stuff matters. You need to get build automation first and you’ll see a huge change in your team.

And then about testing, he said:

The biggest mistake we made as an industry is to call it “testing” and label it as something different. There’s no such thing as “testing”. There’s only “development”. If we didn’t have unit-testing frameworks that separated out the process of testing from development so much, we’d be in a different world.

Holy crap, that’s why I had a hard time convincing folks in the past to budget time for testing. People tend to think that “testing” is somehow a luxury, something you do in addition to developing. That gives people (umm…management) a big chunk to simply grab and throw out the window. Don’t give anyone that choice, because:

Testing is part of development, not a separate task.

I feel quite stupid for not thinking about it this way until now. But David didn’t stop there. He challenged me to go back, write tests for something within 10 days and send him a screenshot of it. David, I accept your challenge.
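If you want to take the same challenge, the first test doesn’t need to be fancy. Here’s a minimal sketch of the kind of thing I mean, in NUnit syntax (PriceCalculator is a made-up example class, not anything from my actual codebase):

using System;
using NUnit.Framework;

// A made-up bit of production code to put under test.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percent)
    {
        if (percent < 0m || percent > 100m)
            throw new ArgumentOutOfRangeException("percent");
        return price - (price * percent / 100m);
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void ApplyDiscount_TakesThePercentageOffThePrice()
    {
        var calc = new PriceCalculator();
        Assert.AreEqual(90m, calc.ApplyDiscount(100m, 10m));
    }

    [Test]
    public void ApplyDiscount_RejectsImpossiblePercentages()
    {
        var calc = new PriceCalculator();
        Assert.Throws<ArgumentOutOfRangeException>(() => calc.ApplyDiscount(100m, -5m));
    }
}

Ten minutes of work, one real behavior pinned down. That’s development, not a separate chore.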

There’s a BOF session today on “Path to Successful Scrum” by David. I’ll be there. I highly recommend you attend!

IMG_0544

agile | msteched | scrum | testing
Wednesday, June 13, 2012 4:17:37 PM (GMT Daylight Time, UTC+01:00)

Can I say that the Technical Learning Center is my absolute favorite place to be at TechEd? This is where you can talk to experts (folks from Microsoft and others) one-on-one about specific issues, problems and solutions, and work them out on a whiteboard together. I had a great conversation about application/reporting integration with the SQL Server folks. I took this picture of Paul Baker and Jen Stirrup, who helped me work through some ideas for Web service integration between SQL Server Reporting Services and MVC applications (we even talked about PHP!).

TLC_SQLBI_thumb[1]

After that I also had a chance to talk about Master Data Services, Data Quality Services and even SharePoint (eek!).

It’s easy to get stuck in a rut going from session to session and not looking elsewhere, but if you’re not spending some time at the TLC, you’re missing out. You don’t have to have a complicated problem to solve. Just stand there and listen to some of the conversations. You’ll learn a lot.

BI | msteched | sqlserver | tlc
Wednesday, June 13, 2012 4:11:30 PM (GMT Daylight Time, UTC+01:00)
# Tuesday, June 12, 2012

To say the TechEd schedule is packed would be quite an understatement.

I started out early from the hotel around 6:30 AM because I knew there would be a lot of folks trying to register at once. The convention center is just massive, so even getting into the TechEd area takes a few minutes:

Convention_Center

And TechEd is just in the North/South building; there’s other stuff going on in the East/West buildings, so the scale of the whole place is quite amazing. This is the 20th anniversary of TechEd, and what’s interesting is the choice of icons in that image. You can see “people”, “ideas” and a “globe”, and then you have “cloud”. That’s not just marketing. There’s a heavy focus on all things Azure and a ton of sessions of all kinds about it.

TechEd_Entrance

And good thing I registered on Sunday instead of waiting for Monday morning because this was the scene at registration:

Registration

Frankly, that doesn’t even look too bad (remember, there are 10,000 attendees in total), and that’s a testament to the great organization and flow that’s been designed into every process. Just watching the meals being served is fascinating to me (more about food in another post) and I can’t help but think about the kind of planning and orchestration that goes into this.

Things to Do (other than sessions)

By the looks of it, most people come to TechEd to listen to experts talk about various topics in sessions that mix slides and demos. I’ve mostly attended the developer and business intelligence sessions, and I am happy to say that most of them are demo-heavy and don’t use many slides at all. Outside that, there’s a lot going on at the conference that’s fun or valuable or both. Here’s a sampling of a few things:

Blogger Hub

I had no idea there would be such a thing, but this is something all conferences need to do. It has everything you need:

Blogger_Hub

Except the coffee and the laptop, but you can find both around TechEd as well. The best part about the Blogger Hub is that it’s centrally located, allowing me to keep tabs on stuff happening all around while blogging.

Kinect Boxing Bots

If you’ve played an FPS like Call of Duty, you have surely had moments when you thought, “Wouldn’t it be cool to play this with Kinect and have your avatar replicate your motions on screen?” Well, this is no Call of Duty – it’s a couple of simple boxing robots, but they are controlled by people via Kinect! This robot came towards me and pulled off a few punches in the air while I was taking this picture (I took it towards the end of the day, to avoid the crowds).

IMG_0531

TechExpo

This is one of my favorite things to do every day. I came in thinking I wouldn’t really want to talk to a bunch of salespeople, but I was proved wrong. I didn’t really encounter the hard sell much at all; instead, there was always someone technical I could talk to about specific scenarios. Two of the best conversations I can recall were with RedGate and ActiveBatch. What a great group of folks to talk to.

Swag

Then there’s the swag, which frankly I’ve been trying to collect as *little* of as possible, sticking to the good stuff I can either use or give away. There isn’t that much of it here, which I actually find great. In my opinion, swag is simply a waste of money, so I’d rather see less of it (do you really need another Styrofoam tube with a seizure-inducing multi-colored flicker?).

There’s one kind of swag I do not refuse and in fact specifically ask for: books. Hard copies, eBooks, coupons for books, whatever. So far I’ve gotten books on .Net performance profiling and Agile development with TFS, plus a whole bunch of eBooks, all of which are going straight to either the folks in my office or the user group. Love it.

TechExpo

Arcade (what?!)

Yes, there’s a whole retro arcade at TechEd, where you can play all kinds of stuff. I haven’t checked out any of the games and honestly, I have to wonder: where do these folks get the time? :) I can barely get through all the technical stuff I want to do.

IMG_0545

And that’s a quick roundup of some things at TechEd. I know it’s not very organized, but hopefully you get a feel for what’s going on here outside the sessions.

Tuesday, June 12, 2012 11:16:00 PM (GMT Daylight Time, UTC+01:00)
# Monday, June 11, 2012

When I booked my travel (back in February!), I didn’t know there was such a thing as TechEd 101, which I could attend on the pre-conference day even though I hadn’t registered for the pre-conference. I was hoping my plane would land a bit early and give me a chance to catch it anyway, so after I landed, I wasted absolutely no time and drove straight to the convention center.

Registration & Swag

Registration was a quick self-check-in: scan the barcode you were sent in the mail at a kiosk, walk over to get your badge and closing-party wristband, then head over to pick up your materials. I had been told by several folks to bring an extra suitcase just to carry stuff back from the conference. So I did, but I didn’t expect the swag collection to begin so soon. Once I picked up my materials, I asked a couple of different people (including people at the registration desk) whether there was a photography policy. After being assured that I could take whatever pictures and videos I wanted (great!), I arranged everything I got on a table.

Registration_Materials

Here’s what I got at registration (labeled):

  1. TechEd T-shirt (pick your size at registration).
  2. Conference guide *and* mini-guide. Not sure why we need two but I am sure I’ll find out.
  3. Notepad.
  4. Badge and pen.
  5. Copies of Redmond and MSDN magazine.
  6. Closing party wrist band and info.
  7. A recycle bag for recycling conference stuff you don’t want. Nice.
  8. Promotional and marketing materials (there’s lots of these and the recycling bag is very thoughtful).
  9. An environmentally friendly water bottle that I know I am going to use a lot!
  10. The backpack that all this stuff was in.

Phew, I doubt anyone’s gonna get through all of it, but I actually scanned through most of it and all I will say is this: if there’s anything you want to read, it’s the conference guide. The mini-guide has basically the same information and is good to carry around in your badge for quick reference (hey, I just answered my own question from #2 above).

Oh, and study the layout of the venue for a bit so you won’t waste time running around, because the place is huge. I mean, I’d heard all these stories from people about how big the event is and how much stuff happens at it, but when I walked into the convention center, I had no idea just how massive it was going to be. Are you ready? Here’s a picture of *just* the hands-on labs section (those are the sessions with the code HOL):

HOL

Hands On Labs

I actually talked to one of the folks helping out with the HOLs and learned there are approximately 400 machines there, all hooked up to virtual machines. He also mentioned that this year the number of attendees was capped at a much lower number than in previous years. I shudder to think what those were like.

I learned that the good thing about the HOLs this year compared to last year is that every machine has all the labs. I still had about an hour before close, so I sat down and did a quick HOL on PowerView in SQL Server 2012 (good stuff – if you are looking for self-service BI solutions, try this lab). I am here mostly for app development stuff, but I’ve always loved visualization and analytics, so BI is right up there at the top of my interests. I am looking forward to doing a lot more HOLs in general.

I was told the place would be chilly (it wasn’t, actually), but regardless, I tend to get cold while sitting down, so I had been planning to buy a hoodie at the conference store. I picked up one of these to wear during the conference :)

tech_2012_hoodie

The store has some other interesting stuff I’ll check out later (including lots of great books at 20% off). But it was closing time so I called it a day and headed to the hotel.

Tomorrow, I’ll blog about the first day of the conference, keynotes, food, sessions, booths and lots of other things.

Monday, June 11, 2012 4:14:06 AM (GMT Daylight Time, UTC+01:00)

As I fly through the air towards Orlando, I thought I’d make a quick blog post about TechEd and the Academic Institution Meetup.

This is my first TechEd ever and I am starting to get into high-absorption mode, where I calm my mind and prepare for a deluge of information. The schedule looks pretty intense, with something or other scheduled for every hour of the conference. I want to mention one event in particular that I’ve had a small hand in promoting: the Academic Institution Meetup.

A while back, an awesome techie named Jessica posted a feeler on the myTechEd discussion group asking whether anyone was interested in meeting other techies who work in academia or higher-ed. I was totally going to start a meetup myself, so when I found hers, it naturally got me very excited. So I said: sure, that’s a great idea!

And so did another person. And another. And another… And now we have nearly 70 people saying they will attend. Wow! If you haven’t chimed in: stand up and be counted!

Oh, and Jessica went to the trouble of sending invites to as many people as she could, 9 people at a time (sadly, that’s the limit on how many people myTechEd allows one to add to a meeting). And the TechEd organizers? They could have said sorry, there’s no space, everything’s full, it’s too late and it’s too much trouble for a small group of people. But no, they went about finding us a space to meet *after hours* in the conference venue and kept us up to date via Twitter and email. Event planners everywhere have a thing or two to learn from these folks about being flexible and resourceful. Thank you!

And back to you. If you work in academia, you *don’t* want to miss this. Why?

Higher-ed is Different

Higher-ed, in my experience, is very different from other organizations (and I’ve worked in some others). The focus isn’t on the bottom line, decisions aren’t typically driven by a desire to make lots of profit, and I can’t ever remember anyone talking about last quarter’s results. All my planning and outlook is based on where I want the organization and my team to be in 3-5 years. If there’s one word to describe the culture, it would be nurturing.

If you are the type of person who wants to work in a place whose mission is absolutely foundational to improving the lives of people and the world in general, you can’t do much better than higher-ed. I am not saying that working for a bank, car maker or software development company doesn’t change lives – it does. Still, there is something fundamental about education as a force for change that almost nothing can beat.

Not to mention, if you are working in a publicly funded institution, like I am, you have a responsibility that is quite unique. So the experience of working in higher-ed tends to be very different, and I, for one, want to hear from you: what does it feel like to work in academia? What problems do you face that are unique, and how do you deal with them? What technologies and processes work, and which don’t? There are so many things I want to know.

Network & Learn...

The Academic Institution Meetup is a great opportunity to network with folks working in similar areas, learn from them and, most importantly, make friends. I am bringing a power strip (as Rick suggested on Channel 9 :)), an extension cable and *lots* of business cards (yes, I still believe in actual business cards).

…and Share!

I’ll be blogging about MSTechEd as I go, with two goals: 1) to help me remember what I’ve experienced and 2) to share what I’ve learned so others may benefit. Hopefully there’ll be excellent wi-fi at the conference as well as in the hotel, but I’ll report on that too!

My next post will be from Orlando. See you there!

Monday, June 11, 2012 2:00:00 AM (GMT Daylight Time, UTC+01:00)
# Wednesday, March 14, 2012

image

Today one of the applications I was working on reached beta. The project itself is an MVC 3 web application called MyRA, but I’d built a base library that handled lots of things: authentication and authorization, caching, data access, HTML helpers, client-side scripting and more. The key words in that sentence are lots of things. Putting everything in one core library meant that developers were forced to either use the entire library or none of it.

As a result, I split the project up into several projects – Core, Security, Data Access, WebApp, etc. – updated the WebApp to use dependency injection with Ninject throughout, and changed the project structure to make it easier to work with.
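To give a flavor of what that looks like, here’s a minimal sketch of the Ninject wiring. The IUserRepository/OracleUserRepository pair is hypothetical, a stand-in for the real services that used to be buried in the core library:

using Ninject;

// Hypothetical service interface; in the real solution this
// would live in the Core or Data Access project.
public interface IUserRepository
{
    string GetDisplayName(string username);
}

// Hypothetical Oracle-backed implementation from the Data Access project.
public class OracleUserRepository : IUserRepository
{
    public string GetDisplayName(string username)
    {
        // The real version would query Oracle; this is just a stub.
        return username.ToUpperInvariant();
    }
}

public static class IocConfig
{
    // A single composition root for the WebApp: code asks for
    // interfaces and Ninject supplies the concrete types.
    public static IKernel CreateKernel()
    {
        var kernel = new StandardKernel();
        kernel.Bind<IUserRepository>().To<OracleUserRepository>().InSingletonScope();
        return kernel;
    }
}

The payoff of the split is that a project which only needs, say, data access can reference just that assembly and bind its own implementations, instead of dragging the whole core library along.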

Project Structure and Subversion

Yes, I am still using Subversion until our GitHub Enterprise installation becomes available for serious use, but the restructuring already got me thinking about making our project structure simpler. Plus, I’ve been tinkering with Continuous Integration lately, and that made me realize our current structure was going to make some things harder than necessary. So far our projects had used the repository layout described in the red bean book, which meant that every project had its own trunk, branches and tags:

+ Project1
  + trunk
  + branches
  + tags

However, in our environment a project’s dependencies are most of the time used only by that project, and this layout became very cumbersome. So when I restructured the project, I chose a branch-per-solution-version approach, which looks like this:

+ Solution1
  + trunk
    + Project1
    + Project2
  + branches
    + CrazyIdeaTryout
      + Project1
      + Project2

This actually makes things a lot easier, even if Project1 is shared across other solutions, because it makes the provenance of the project clear and developers understand that a branch of a project is meant to work against components in the same branch. That makes it difficult to compile version 1.0 of one project against a dependency meant for version 1.1 – which, if the build succeeded, could cause subtle bugs to appear at runtime!

NuGet Package Restore

When I’d finished, the project wouldn’t build, because as part of the restructuring I had gotten rid of all the 3rd-party libraries I’d added via NuGet. But since I now had a per-solution hierarchy, it made sense to stop checking in the NuGet dependencies and just have them installed as needed. Enter NuGet Package Restore, which makes this as easy as right-clicking on the solution, then clicking:

image

Once I’d done this, checking out and building a project becomes the following sequence: 1) right-click > TortoiseSVN > SVN Checkout, 2) double-click the solution file (.sln), 3) hit F5. That’s it. And since you can use NuGet to set up an internal repository, this takes care of internal dependency management too. I can easily translate this into a continuous integration workflow in TeamCity.

Post-Build Events

Many developers don’t know that the project files Visual Studio creates (the .csproj files) are actually build scripts, just like those for Apache Ant or NAnt. They are really easy to use, but if you want a quick start, I’d recommend watching the Introduction to MSBuild course on PluralSight.

Once you’ve gotten past the fear of mucking with sacred files like .csproj (when I started out with .Net, I was scared to touch these things), you’ll see that there’s a lot you can do with them. One underused (in my opinion) feature is the Build Events tab in the project properties:

image

This came in handy today. You see, this app uses a bunch of Oracle libraries, which tend to crash if they aren’t linked and run against the same version and type of Oracle client. In other words, if you build against Instant Client and run against Server, it won’t work. To work around this stupid issue, we bundle all the necessary Oracle native DLLs along with the Web App, which works. However, these Oracle libraries are not in my startup project; they are inside a folder in a dependent project. I need to copy these files over whenever the project is built, like this:

image

(Yes, the folder is called oral, for Oracle Libraries. Try and focus :P) The project on the right is my startup project, which references the project on the left, but because these DLLs aren’t managed libraries, I can’t reference them in the project. They are just expected to be in the bin folder at runtime.

Visual Studio 2010 doesn’t handle this. If you select the DLLs and set their build action to Copy, you’ll get this in your output folder:

image

See that extra oral folder? It means the DLLs won’t get picked up at runtime. Visual Studio will do this even if the .csproj file doesn’t ask for it:

image

The way to fix this is to add some post-build event commands. Since I could count on Visual Studio copying the DLLs along with the oral folder, all I need to do is:

  1. Move the DLLs from bin\oral to bin\
  2. Delete the bin\oral directory.

I need this to happen correctly no matter which build configuration is used, which means using Visual Studio’s built-in macros. This is what I ended up with:

REM The Oracle libraries in MyRA.DAL.Oracle\oral are copied over
REM along with the 'oral' folder, thus causing the DLLs to be not
REM picked up by the project. Move them to the right place.
move /y $(TargetDir)oral\*.dll $(TargetDir)

REM Then delete the 'oral' folder because we don't need it.
rmdir /q $(TargetDir)oral

Those $(TargetDir) symbols will be replaced with the absolute path to your project’s output directory. There are other macros that you can get to on the Build Events tab. Here’s what that editor looks like:

image

All this ultimately makes it into your project file, which when committed to source control, ensures that other developers get the same build that you do:

image

It’s really as simple as that.

Wednesday, March 14, 2012 9:40:30 PM (GMT Standard Time, UTC+00:00)
# Tuesday, March 6, 2012

[This is one of a series of posts about my experience building a live site with Umbraco 5.]

This is a crucial step. If you don’t set up your development environment right, you’ll find yourself backtracking a lot.

Visual Studio

There is only one good way to develop with Umbraco 5 (U5), and that’s with Visual Studio (VS). U5 is designed to play very well with MVC 3, to the point that you can build your own MVC application that runs right alongside U5 and uses the U5 API for all or part of its content management. Now, you can certainly create and edit everything you need to build your site using U5’s web interface, but doing so won’t get you the great IntelliSense and static code analysis for Razor and C#. In addition, if you are a developer, you can only step into the debugger if you launch the website via VS.

It’s possible to install U5 and run it out of VS as a Website project, but that takes a bit of work (we’ll do that later in this post). It’s easier to install U5 with WebMatrix and then move to VS, and this has the advantage that you can simply erase the installation and start over quickly if something goes awry. So what follows is a slightly roundabout way of achieving the desired state.

Note: Since I wrote this, the Umbraco documentation folks have produced a much better installation guide that uses NuGet. Look here: http://our.umbraco.org/documentation/v501/Getting-Started/Installing-Umbraco-with-NuGet You should use that if you’re looking for a Visual Studio-only type experience, instead of the roundabout way I describe here.

Installation

  • Download Umbraco and unzip it into a folder called, say, U5RTM.
  • Right click on the U5RTM folder and select Open as a website with WebMatrix. This will open the site in WebMatrix and give you a link you can click on to launch the install process:

    image

SQL CE, SQL Express or SQL Server?

At this point you have to choose between different databases, and this comes down to personal preference. I prefer to work with SQL Express and export the data (via Tasks > Export Data in SQL Server Management Studio) to my test/staging/production environment, which runs a full-blown SQL Server. Using SQL Express also lets me do things with SSIS. However, a lot of people like the fact that backing up a SQL CE database is as easy as copy-pasting an .SDF file.

I am going to use SQL Express here, which involves providing a full connection string to the database server. Follow through with creating an administrator account and you should be able to get into the site.

If everything works well, you are now ready to make this a VS project.

Setting up the project in Visual Studio

This step used to be time consuming. You see, Visual Studio project files are really build scripts for a tool called MSBuild, so they carry all this metadata about what type of project something is, plus a list of every single resource that needs to be included in the compilation or build process. Creating this by hand is a pain, so it wasn’t worth doing.

Now, however, there is a really nice pre-configured project on Our, created by Sebastiaan, that comes with a project and a solution file. Follow those instructions, but instead of downloading U5 as it says on that page, simply copy all the sources from the U5RTM folder into the Umbraco.Web folder. I called my solution folder Website, and both projects that you get from Sebastiaan’s package are in it.

Now, when you open the project in VS, you may find that some files and/or folders are not included in the project. This happens because Sebastiaan’s original project includes only the original sources that you get when you download U5, not the additional folders/files U5 creates during installation. If you’d like to include these resources, simply right-click on the files/folders and click Include in Project. At this point, this is how your project structure will look (ignore the App_Code folder, which you won’t have – we’ll get to it in a future blog post):

image

Now when you launch the project, U5 will start off just like any MVC 3 project.

What about source control?

OK, I lied about something. Before even copying anything into my Website folder, I set up Git as my source control system. In true GitHub fashion, I added a Readme.md file that contains some information about the project. Here’s how my directories look at this point:

image
image

(Yes, I am using ReSharper. You don’t need it to develop with U5, but if you aren’t using it, you’re really spending a lot more time on coding than you need to.)

Where’s the connection string?

After all this, if you open up Web.config in Umbraco.Web to take a look at the connection string, you won’t find it. That’s because U5’s connection string is actually part of the provider that supplies database capabilities (more on that in a future blog post). The configuration for that is located in the HiveConfig folder, as shown below:

image

Notice that connection.config? That’s a file you won’t have by default, because it’s something I added. To ensure that I don’t accidentally commit the connection string to the source control system, I extracted the connection string using the configSource attribute and added the connection.config file to .gitignore. Here’s how that looks:

image

And the connection.config then looks like this:

image

At this point, I push the entire folder into my master repository and take a deep breath, knowing that all my work is safe from catastrophe.

That’s it. Now you can launch the project from VS, debug, and create partials and views inside VS, and U5 and VS will play nicely with each other.

Next time, I’ll talk about how to think like an Umbracian.

UPDATE (with correction!):

@andythompson asked me 1) what my .gitignore looks like and 2) how I publish with this method.

.gitignore

Here’s my .gitignore, in its entirety:

/_ReSharper.Umbraco.Web/
/_ReSharper.Website/

That’s just a list of directory paths, one per line.

BUT WAIT, THERE’S MORE: A .gitignore’s patterns are relative to the directory it lives in, so the way I have this set up, every directory with assets I want git to ignore gets its own .gitignore file (the alternative is directory-qualified paths in a single top-level file). I actually did this, but totally forgot about it when I posted this update. When I went back to developing on my site, I realized to my horror that I’d forgotten to mention this key point. Apologies! So here are my other .gitignore files:

image

The .gitignore in App_Data looks like this:

# Don't need logs in repo
Logs/
# This keeps track of minification, etc and is on a per-machine basis.
ClientDependency/

and the one in App_Data/Umbraco/HiveConfig looks like this:

# This file contains connection strings so it should not be committed to the repository.
connection.config

Publishing

The way I publish this site is with a simple right-click publish to FTP/the local file system. The environment I use doesn’t support WebDeploy, so I have to do it this way. As for the database, I used the Export task in SQL Server Management Studio:

image

Yeah, it’s not the smoothest process, but I am moving towards setting up continuous integration via TeamCity, so at some point this will happen hands-free on every commit.

Tuesday, March 6, 2012 4:06:55 AM (GMT Standard Time, UTC+00:00)
# Saturday, March 3, 2012

[This is one of a series of posts I am doing about my experience building a live site with Umbraco 5.]

If you’re one of the few people actually reading this blog, you know that I love Content Management Systems and that I’ve worked with a fair number of them. Last time, I recommended a fantastic CMS called Concrete 5.

“But”, you say, “I don’t know PHP. I’ve spent all my career in C# and .Net and I don’t want to learn yet another language ecosystem.”

[Soapbox: If you want my advice (hey, it’s my blog, right?), you should learn at least one new language every year. You don’t have to master a new language every year (good for you if you do!), but you should at least try. Not sure where to start? Why not order a sampler?]

Now, I do learn new languages all the time. I am on sort of a functional kick right now, learning Ruby and F#. But I still prefer to code in C# and ASP.Net MVC. So when I took on the task of rebuilding IU’s Web & Multimedia Community website, I decided it was going to be in C#/.Net. Of course, a big factor in this choice was that the current hosting environment, where we are generously given free resources, supports only ColdFusion and .Net. The current site is built with ColdFusion/Mach II, which I know nothing about, and I didn’t want to go through the trouble of shifting providers, changing DNS entries, blah blah…not interesting. So .Net it is.

Considerations

I tried several different pieces of free Web software, not all of them CMSes. I am not going to explain in detail all the issues I had with each of them, but here’s a quick rundown of what I tried and how it felt:

DotNetNuke Community Edition
Developing with DNN made me understand what a developer’s hell might feel like. It’s clunky, it has an incomprehensible model and nothing makes sense. At first, I thought it was me, because I hate WebForms, so I had another dev who loves SharePoint (I know) try it out. After struggling for a week, even *he* could not understand how the hell one went about customizing anything. I cannot understand why this thing is so popular. Fail.

Subtext
It’s not a CMS and is really just for blogging. That’s fine; I could have made it work, because most of what I needed on the community site had to do with publishing news and events. The only reason I even tried it is that Phil Haack, whom I have great respect for (I’ve never met him but I read his blog), is the main man on the project. However, I tried to install it on SQL Express using WebMatrix and it died with an error I could not even comprehend. Fail.

N2
I went to download this after I heard good things about it on StackOverflow. On N2’s front page, it says:

Using it's interface is intuitive and empowering. The developer story is something exquisite.

Wow, that sounds amazing. But then I looked at the screenshots and sadly, it doesn’t look intuitive and empowering to me at all. It just looks clunky. Perhaps the developer experience is fantastic, but if the CMS builder didn’t care about how their own site looked, visual design must be pretty low on the priority list, right? Fail.

Orchard
This is a decent CMS, and as long as you are willing to give your CMS create-table permissions and watch the number of tables explode as you install modules, you’ll do fine with it (Concrete 5 does the same). It has outstanding documentation for an open-source product (because MS is behind it, I guess). But I find Orchard’s theming architecture needlessly complicated. MVC already has all the infrastructure one needs for theming: layouts, sections, functions, helpers, partials…now I have to learn widgets, layers, zones, shapes and placement files? Yikes. Still, I am trying to build the .Net UG site with it, because I was able to download it, install it and start adding custom content in less than 5 minutes, and also because I don’t care about doing custom development with it. I am going to simply download modules for whatever I need, and so far it looks like this strategy will work. Pass.

Umbraco
Which brings me to Umbraco. The first time I tried Umbraco 5 was on #umbweekend, when Umbraco 5.0 was getting its final touches before release. After I downloaded it, it didn’t work when I tried to install it without admin rights on SQL CE. But it worked just fine on SQL Express, and the response I got on JabbR when I asked a couple of questions (very friendly and positive) made me want to continue.

I really struggled with Umbraco for the first couple of days. I just didn’t know where to start or what to do. There were no docs, no videos, no blog posts, no forum answers. I downloaded the source code, set it up in Visual Studio, stepped through it in the debugger line by line and really tried to understand it. The fact that Umbraco’s data model is really a meta-model doesn’t make things easy. After a couple of days, I had this weird feeling that I had lots of different pieces that were somehow waiting to be connected. It was very frustrating; I couldn’t sleep, and all my brain kept thinking about was connecting those pieces. I even started dreaming about Umbraco code.

I was almost about to give up when I finally ran into CodeGarden 2011. I can’t remember exactly which presentation it was (I think it was Deep Dive into Jupiter), but there was this one moment when *BAM* my confusion just disappeared and suddenly I knew what I had to do.

Since then, working with Umbraco has been, I am happy to report, just plain fun. Just a few hours of working with Umbraco convinced me that this will be the only .Net CMS I’ll need for a long time.

Now, the people who build Umbraco are awesome and the product is great, but the documentation is really terrible, mostly because there’s only so much a small group of people can do. Since Umbraco HQ decided to make March documentation month, I am going to do my part. This is part 1 in a series of posts documenting my experience building a technical community website in Umbraco 5. I am starting from scratch. I have no affiliation with Umbraco and never even tried it before v4, but I am an experienced MVC 3 developer and I have a few CMSes under my belt, so this shouldn’t be too hard.

I assume you are primarily a developer who knows C# and ASP.NET MVC. I will try my best to link to concepts and resources as I go along, but I won’t provide much explanation for things like LINQ, Razor, or how C# dynamic objects work.

I haven’t figured out much about Umbraco yet – it’s a deep product (i.e. lots of deep thought has gone into it), so it will be a while before I understand significant parts of it – but I am sure there will always be first-timers struggling with Umbraco as I was. So I hope these blog posts will help someone.

Next time, I’ll talk about how to setup the development environment.

.net | cms | umbraco
Saturday, March 3, 2012 4:39:10 AM (GMT Standard Time, UTC+00:00)
# Wednesday, February 15, 2012

The first time I ever wrote any .Net code was in June 2009, when I started working at Indiana University. My first job was to replace an MS Access application that performed batch processing by representing each data flow task as a form and invoking the data flow task in the form’s Form Load method. Yeah.

Since then, I’ve migrated almost entirely to Web development, and along the way there were many new and exciting things to learn. In the process of learning and working on new projects, though, I always wondered: how do experts like Jon Skeet know so much about the code? Why is it that some of these MVPs seem to know how the code works, in what order things get called, how it treats exceptional cases and so on?

Answer: They read source code*. And the way you read source code for a DLL is by using a decompiler.

Decompilers, as the word implies, take compiled code and produce source code from it. One particular decompiler kept being referenced by the experts: Reflector. It seems (seemed?) to be the most popular of them all (even Jon Skeet says in his book that it’s his weapon of choice). Of course, if you are just looking to explore what the IL for a piece of code looks like, you could just use LinqPad (example inspired by C# in Depth!):

linqpad
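If you want to try this yourself, any small snippet works. Something like the following, my own example rather than the one from the book, is fun to inspect because the compiler expands the yield return into an entire hidden state-machine class:

using System.Collections.Generic;

public static class Sequences
{
    // Three lines of C#, but the compiler generates a whole
    // enumerator class behind the scenes to implement this.
    public static IEnumerable<int> Evens(int count)
    {
        for (var i = 0; i < count; i++)
            yield return i * 2;
    }
}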

But I can’t read raw IL fluently and it’s not very interesting to me. Sadly, it turns out Reflector is no longer free, so I looked around for free alternatives and narrowed it down to two: ILSpy and dotPeek. Both generate C# code from DLLs.

ILSpy doesn’t need installing. Just unzip it. This is everything:

image

So how do you use it? Just load an assembly, double-click on a type and watch ILSpy produce C# code. Really, it’s that simple. Here’s what you get when you do that (I picked a simple class from ASP.Net MVC):

image

Wow, types, names and declarations, embedded strings. How about we expand some of the collapsed sections:

image

Ah, so that’s what MvcForm does when disposed: it writes a form closing tag to the output stream. But how does it write out the opening tag? Well, if you recall, in MVC you don’t start an MvcForm by instantiating it directly; you use HtmlHelper.BeginForm. That’s probably in FormExtensions (in keeping with the good practice of putting extension methods in a class named <TypeForWhichExtensionsAreBeingProvided>Extensions). So let’s go there now:

image 

Oh my. A whole bunch of overloads that all call the same method with nulls and default arguments (obviously not using C#’s optional parameters). And at the end (and this is important), the actual logic for what happens when you call Html.BeginForm. But wait a minute…what’s that EndForm extension method?

image

Huh, it actually duplicates some of the logic that’s in MvcForm.Dispose – a violation of DRY! In my code, I try very hard to avoid breaking DRY, not just because it makes things easier to maintain, but also because I want to give other developers preferably only one way of doing things (yes, I don’t care for the Perl TMTOWTDI nonsense, in case you were wondering). For example, if I had my way, I would force people to always put forms in using blocks. Regardless, it’s immaterial how I feel. What’s important is that you get to see what the designers of this API were doing, and it makes you think about whether there’s a better way to do it.
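To make the DRY point concrete, here’s a sketch of the shape I’d prefer. This is illustration only, a simplified stand-in rather than the actual MVC source; the closing logic lives in exactly one place and everything else delegates to it:

using System;
using System.IO;

// Illustration only: a simplified stand-in for MvcForm.
public class SimpleForm : IDisposable
{
    private readonly TextWriter _writer;
    private bool _disposed;

    public SimpleForm(TextWriter writer)
    {
        _writer = writer;
        _writer.Write("<form>");
    }

    // The closing logic lives here and nowhere else.
    public void Dispose()
    {
        if (_disposed)
            return;
        _disposed = true;
        _writer.Write("</form>");
    }

    // An explicit End() simply delegates to Dispose() instead of
    // duplicating the closing logic like the decompiled EndForm does.
    public void End()
    {
        Dispose();
    }
}

With that shape, consumers can write using (var form = new SimpleForm(writer)) { … } and the closing tag can never be forgotten, whether or not they also call End explicitly.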

Suddenly, instead of libraries being these untouchable, forbidden artifacts, they become tangible, imperfect pieces of work by other human beings, just like you and me.

So after I poked around ILSpy a bit, I decided to give dotPeek a try. Extract the folder and…

image

WOW, there is a LOT of stuff in there. It looks like it comes with a lot of ReSharper DLLs (and you’ll see why). Start up dotPeek, browse to the same type (FormExtensions) and you get:

image

Everything is expanded by default and it looks like Visual Studio. In fact, while poking around, I totally forgot that I was looking at decompilation output and hit ALT+Enter to fix some code! It really makes you forget that you are not looking at the original source. If that’s not enough, check this out:

image

Yep, that is the absolutely fantastic navigation support you all know and love in ReSharper. You can also do Find Usages, or paste a stack trace and browse through the types in it. This integration really won me over. Since the day I tried both ILSpy and dotPeek, I’ve pretty much been using dotPeek exclusively and can wholeheartedly recommend it.

So there you have it, a quick and dirty look at what decompilers can do for you. Every .Net developer should be using one. These are the artifacts of our craft and the only way we can get better is by learning from each other. What are you waiting for?

* OK, I don’t know for sure that Jon Skeet reads other people’s source code, but if you read C# in Depth, you’ll find enough evidence that he uses a decompiler to understand what .Net is doing under the covers.

Wednesday, February 15, 2012 3:24:32 AM (GMT Standard Time, UTC+00:00)