Rash thoughts about .NET, C#, F# and Dynamics NAV.


"Every solution will only lead to new problems."

Category .NET

This category is about the Microsoft .NET Framework and related technologies such as the CLR, C#, ASP.NET, WCF, WF and WPF.

Sunday, 11. January 2009


How I do Continuous Integration with my C# / F# projects – part IV: Adding documentation

Filed under: C#,English posts,F#,Tools,Visual Studio — Steffen Forkmann at 12:46 Uhr

In the last three posts I showed how to set up a Continuous Integration environment for F# or C# projects with Subversion (part I), TeamCity (part II) and NUnit (part III).

This time I want to show how we can set up an automated documentation build.

Installing and using GhostDoc

“GhostDoc is a free add-in for Visual Studio that automatically generates XML documentation comments for C#. Either by using existing documentation inherited from base classes or implemented interfaces, or by deducing comments from name and type of e.g. methods, properties or parameters.”

[product website]

GhostDoc is one of my favorite Visual Studio plugins. It allows me to generate comments for nearly all my C# functions. Of course these generated comments aren’t sufficient in every case – but they are a very good start.

Unfortunately GhostDoc doesn’t work for F# 🙁 – the current version supports C#, and the VB.NET support is labeled “experimental”.
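For F# code you can at least write the XML comments by hand – the F# compiler also picks up /// comments and emits them into the XML documentation file. A minimal sketch (the factorial function is only an example, not GhostDoc output):

/// <summary>Calculates the factorial of the given number.</summary>
/// <param name="x">The number.</param>
/// <returns>The factorial of x.</returns>
let rec factorial x =
  if x <= 1 then 1 else x * factorial (x - 1)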

Download and install GhostDoc from http://www.roland-weigelt.de/ghostdoc/.

Now you should be able to generate XML-based comments directly in your C# code:

Using GhostDoc

Generated XML-comment

The next step is to enable the XML documentation file in your Visual Studio build settings:


Committing these changes and adjusting the build artifacts will produce the input for the documentation build:

Adjust the build artifacts

Build artifacts

Using Sandcastle to generate documentation

“Sandcastle produces accurate, MSDN style, comprehensive documentation by reflecting over the source assemblies and optionally integrating XML Documentation Comments. Sandcastle has the following key features:

  • Works with or without authored comments
  • Supports Generics and .NET Framework 2.0
  • Sandcastle has 2 main components (MrefBuilder and Build Assembler)
  • MrefBuilder generates reflection xml file for Build Assembler
  • Build Assembler includes syntax generation, transformation, etc.
  • Sandcastle is used internally to build .NET Framework documentation”

[Microsoft.com]

Download and install “Sandcastle – Documentation Compiler for Managed Class Libraries” from Microsoft’s download page or http://www.codeplex.com/Sandcastle.

For .chm generation you also have to install the “HTML Help Workshop”. If you want the fancy HTML Help 2.x style (like MSDN has) you need “Innovasys HelpStudio Lite”, which is part of the Visual Studio 2008 SDK.

“HelpStudio Lite is offered with the Visual Studio SDK as an installed component that integrates with Visual Studio. HelpStudio Lite provides a set of authoring tools you use to author and build Help content, create and manage Help projects, and compile Help files that can be integrated with the Visual Studio Help collection.”

[MSDN]

Last but not least, I recommend installing the Sandcastle Help File Builder (SHFB) – this tool gives you a GUI and helps automate the Sandcastle process.

“Sandcastle, created by Microsoft, is a tool used for creating MSDN-style documentation from .NET assemblies and their associated XML comments files. The current version is the May 2008 release. It is command line based and has no GUI front-end, project management features, or an automated build process like those that you can find in NDoc. The Sandcastle Help File Builder was created to fill in the gaps, provide the missing NDoc-like features that are used most often, and provide graphical and command line based tools to build a help file in an automated fashion.”

[product homepage]

After the installation, start SHFB to create a documentation project:

Sandcastle Help File Builder

Add the TestCITestLib.dll to your project and add nunit.framework.dll as a dependency. Now try to compile your help project – if everything is fine the output should look something like this:

Generated Help

Setting up the documentation build

One of the main principles of Continuous Integration is “Keep the Build Fast” – so I am working with staged builds here. The documentation build should only be started if the first build was successful and all unit tests pass. For most projects it is enough to generate the documentation daily or even weekly.

First of all we have to create a simple MSBuild file which executes the SHFB project:

<Project ToolsVersion="3.5" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- 3rd Party Program Settings -->
  <PropertyGroup>
    <SandCastleHFBPath>c:\Program Files (x86)\EWSoftware\Sandcastle Help File Builder\</SandCastleHFBPath>
    <SandCastleHFBCmd>$(SandCastleHFBPath)SandcastleBuilderConsole.exe</SandCastleHFBCmd>
    <SandCastleHFBProject>HelpProject.shfb</SandCastleHFBProject>
  </PropertyGroup>

  <Target Name="BuildDocumentation">
    <!-- Build source code docs -->
    <Exec Command="%22$(SandCastleHFBCmd)%22 %22$(SandCastleHFBProject)%22" />
  </Target>
</Project>
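If you want to verify the target locally before committing, you can run it from a Visual Studio command prompt, e.g. msbuild BuildDocumentation.proj /t:BuildDocumentation (the file name is just an assumption – use whatever you called the build file above).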

Add this build file and the SHFB project to your Visual Studio solution folder and commit these changes.

Put the build settings into the solution folder

Now we can create a new TeamCity build configuration:

Create a new build configuration

Use the same version control settings as in the first build, but take MSBuild as the build runner:

Take the MSBuild runner

We want the documentation to be generated after a successful main build so we add a “dependency build trigger”:


Now we need the artifacts from the main build as the input for our documentation build:

Set up artifacts dependencies

Make sure the artifacts are copied to the directory referenced in your .shfb project. Now run the documentation build – if everything is fine it should give you Documentation.chm as a new artifact:



Thursday, 8. January 2009


How I do Continuous Integration with my C# / F# projects – part III: Running automated UnitTests

Filed under: C#,English posts,F#,Tools,Visual Studio — Steffen Forkmann at 19:13 Uhr

In the last two posts I showed how to set up a Subversion (part I: Setting up Source Control) and a TeamCity server (part II: Setting up a Continuous Integration Server).

This time I will show how we can integrate NUnit to run automated tests on each build. TeamCity supports all major testing frameworks (including MSTest), but I will concentrate on NUnit here.

"NUnit is a unit-testing framework for all .Net languages. Initially ported from JUnit, the current production release, version 2.4, is the fifth major release of this xUnit based unit testing tool for Microsoft .NET. It is written entirely in C# and has been completely redesigned to take advantage of many .NET language features, for example custom attributes and other reflection related capabilities. NUnit brings xUnit to all .NET languages."

[product homepage]

Creating a TestProject

First of all download and install NUnit 2.4.8 (or higher) from http://www.nunit.org/.

Now we add a small function to our F# source code:

let rec factorial = function  
  | 0 -> 1
  | n when n > 0 -> n * factorial (n-1)
  | _ -> invalid_arg "Argument not valid"

This is the function we want to test. We add a new C# class library to our solution (e.g. “TestCITestLib” 😉 ) and add a reference to nunit.framework. Inside this new test library we add a test class with the following code:

namespace TestCITestLib
{
    using NUnit.Framework;

    [TestFixture]
    public class FactorialTest
    {
        [Test]
        public void TestFactorial()
        {
            Assert.AreEqual(1, Program.factorial(0));
            Assert.AreEqual(1, Program.factorial(1));
            Assert.AreEqual(120, Program.factorial(5));
        }

        [Test]
        public void TestFactorialException()
        {
            Program.factorial(-1);
        }
    }
}

To ensure the build runner is able to compile our solution, we put nunit.framework.dll next to our test project and commit our changes.

Adding Nunit.framework.dll

Configure TeamCity for UnitTesting

The next step is to tell TeamCity that the build runner should run our UnitTests:

Configure build runner for NUnit

If we now run the build we should get the following error:

UnitTest error during automated build

Our second test failed because we didn’t declare the expected System.ArgumentException. We can fix this by adding the corresponding attribute to the test function:

[Test,
 ExpectedException(typeof(System.ArgumentException))]
public void TestFactorialException()
{
    Program.factorial(-1);
}

Tests passed

Configure the build output

At this point we have a minimalistic Continuous Integration infrastructure. Every time someone commits to our repository, an automated build is started and the sources are tested with the given unit tests. Now we should concentrate on getting our build output – the artifacts. The term artifact usually refers to files or directories produced during a build. Examples of such artifacts are:

  • Binaries (*.exe, *.dll)
  • Software packages and installers (*.zip, *.msi)
  • Documentation files (e.g. help files)
  • Reports (test reports, coverage reports, …)

At this point we are only interested in the binaries (i.e. CITestLib.dll). We can add the following artifact definition to our TeamCity project:

Configure artifacts in TeamCity

If we now rebuild our solution the build runner collects the configured artifacts and stores them with all build information:

Collected artifacts

Next time I will show how we can add more artifacts – e.g. an automated documentation.


How I do Continuous Integration with my C# / F# projects – part II: Setting up a Continuous Integration Server

Filed under: C#,English posts,F#,Tools,Visual Studio — Steffen Forkmann at 14:50 Uhr

In the last post I showed how easy it is to install Subversion and how it can be integrated into Visual Studio 2008 via AnkhSVN. This time we will set up a Continuous Integration server and configure a build runner.

As a Continuous Integration Server I recommend JetBrains TeamCity. You can download the free professional edition at http://www.jetbrains.com/teamcity/.

Installing TeamCity

During the installation process TeamCity asks for a port number. Make sure it does not conflict with other web applications on your server. I chose port 8085 – and my first build agent got these default settings:

Configure a Build Agent

In the next step you have to accept the License Agreement and create an administrator account:

Set up administrator account

Creating a Project

Now you can create your project and set up the build configuration:

Create project on your TeamCity Server

Create CITest project

Create Build Configuration

Set up version control root

Setting up a build runner

For setting up specific build runners, see the TeamCity documentation. For now I will use the “sln2008” build runner (the runner for Microsoft Visual Studio 2008 solution files).

Runner for Microsoft Visual Studio 2008 solution files

Now add a build trigger, so that whenever someone commits to the Subversion repository the server starts a build.

Build trigger

Testing the build runner

After this step we have two options to start a build: the first is clicking the “Run” button on the project website, the second is doing a check-in:

Triggering the build running with a checkin

After performing the Commit the pending build appears on the project website:

Pending build 

After 60 seconds (see my configuration above) the build starts. Once the build is complete, you can see the results in different ways. The simplest is via the project website:

View build results

Of course TeamCity gives you a lot of different notification and monitoring options, including mail, RSS feeds and system tray notifications.

Next time I will show how we can integrate UnitTesting in our automated build scenario.


How I do Continuous Integration with my C# / F# projects – part I: Setting up Source Control

Filed under: C#,English posts,F#,Tools,Visual Studio — Steffen Forkmann at 14:31 Uhr

In this post series I will show how one can easily set up a Continuous Integration scenario for F# or C# projects with completely free products.

“Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible.”

[Martin Fowler]

The first step for Continuous Integration is to set up a Source Control environment. For many good reasons I chose Subversion – some of them are:

  • Atomic commits
  • Rename/Move/Copy actions preserve the revision history
  • Directories are versioned
  • Multiple repository access protocols including HTTP and HTTPS
  • There is a nice Visual Studio integration (see below)
  • Last but not least: it is completely free 🙂

Source code version control with Subversion

All you need for setting up a complete Subversion environment is to download and install VisualSVN Server from http://www.visualsvn.com/.

“VisualSVN Server is a package that contains everything you need to install, configure and manage Subversion server for your team on Windows platform. It includes Subversion, Apache and a management console.”

[product homepage]

Now you can create user accounts and a repository “CITest” in the VisualSVN Server management console.

Create a repository

VisualSVN Server management console

Subversion integration in Visual Studio 2008

Download and install AnkhSVN 2.0.x from http://ankhsvn.open.collab.net/.

“AnkhSVN is a Subversion SourceControl Provider for Visual Studio. The software allows you to perform the most common version control operations directly from inside the Microsoft Visual Studio IDE. With AnkhSVN you no longer need to leave your IDE to perform tasks like viewing the status of your source code, updating your Subversion working copy and committing changes. You can even browse your repository and you can plug-in your favorite diff tool.”

[product homepage]

Now you can add your C#/F#-solution to your CITest-repository:

  • Open the solution in Visual Studio 2008
  • Open “View/Repository Explorer” and add your repository to AnkhSVN
    • You can copy the URL from the VisualSVN Server management console (In my case this is https://omega:8443/svn/CITest/)
    • You also have to give AnkhSVN your Subversion login
  • Now click the right mouse button on your solution in the Solution Explorer and choose “Add Solution to Subversion”

Add Solution to Subversion

Don't forget to specify a Log Message

Now we can modify our solution and commit our changes via “Commit solution changes” in the Solution Explorer:

Commit Solution Changes

We can easily review our changes via AnkhSVN’s “Repository Explorer” and “History Viewer” in Visual Studio 2008:

Repository Explorer and History Viewer

If we make changes to the Program.fs file, we can see a diff via the “Show changes” functionality:

Diff with AnkhSVN

If you don’t like the default Diff tool you might try WinMerge.

Next time I will show how to set up a Continuous Integration server.


Monday, 10. November 2008


An immutable sorted list in F# – part II

Filed under: .NET 3.0,English posts,F#,Informatik,Theoretische — Steffen Forkmann at 13:55 Uhr

Last time I showed how the immutable set implementation in F# can be used to get an immutable sorted list. Because it is based on a set, that version doesn’t support repeated items. This limitation can be removed by using an additional dictionary (an immutable “Map” in F#) which stores the count of each item.

First I define two basic helper functions for the dictionary:

module MapHelper =
  let addToMap map idx = 
    let value = Map.tryfind idx map
    match value with
      | Some(x) -> Map.add idx (x+1) map
      | None -> Map.add idx 1 map
      
  let removeFromMap map idx = 
    let value = Map.tryfind idx map
    match value with
      | Some(x) -> 
         if x > 1 then 
           Map.add idx (x-1) map 
         else 
           Map.remove idx map
      | None -> map
      
open MapHelper
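A quick sanity check of the helpers (the result values in the comments are just for illustration):

let counts  = [1; 2; 1] |> List.fold_left addToMap Map.empty
// counts now maps 1 -> 2 and 2 -> 1
let counts' = removeFromMap counts 1
// counts' now maps 1 -> 1 and 2 -> 1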

Now I can adjust my sorted list implementation:

// an immutable sorted list - based on F# set
type 'a SortedFList =
 {items: Tagged.Set<'a,Collections.Generic.IComparer<'a>>;
  numbers: Map<'a,int>;
  count: int}
   
  member x.Min = x.items.MinimumElement
  member x.Items = 
    seq {
      for item in x.items do            
        for number in [1..x.GetCount item] do
          yield item}
          
  member x.Length = x.count
  member x.IsEmpty = x.items.IsEmpty
  member x.GetCount item = 
    match Map.tryfind item x.numbers with
      | None -> 0
      | Some(y) -> y
    
  static member FromList(list, sortFunction) = 
    let comparer = FComparer<'a>.Create(sortFunction)        
    let m = list |> List.fold_left addToMap Map.empty
      
    {new 'a SortedFList with 
      items = Tagged.Set<'a>.Create(comparer,list) and
      numbers = m and
      count = list.Length}
      
  static member FromListWithDefaultComparer(list) = 
    SortedFList<'a>.FromList(list,compare)    
      
  static member Empty(sortFunction) = 
    SortedFList<'a>.FromList([],sortFunction)
      
  static member EmptyWithDefaultComparer() = 
    SortedFList<'a>.Empty(compare)            
       
  member x.Add(y) =     
    {x with 
      items = x.items.Add(y);
      numbers = addToMap x.numbers y;
      count = x.count + 1} 
    
  member x.Remove(y) =     
    if x.GetCount y > 0 then
      {x with 
        items = x.items.Remove(y);
        numbers = removeFromMap x.numbers y;
        count = x.count - 1}
    else
      x        
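A short usage sketch for the new version (assuming the code above compiles against the F# release of the time; expected results are shown as comments):

let fruits = SortedFList<string>.FromListWithDefaultComparer(["banana"; "apple"; "banana"])
printfn "%A" (fruits.Items |> Seq.to_list)   // ["apple"; "banana"; "banana"]
printfn "%d" fruits.Length                   // 3
printfn "%d" (fruits.GetCount("banana"))     // 2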

An immutable sorted list in F# – part I

Filed under: .NET 3.0,English posts,F#,Informatik,Theoretische — Steffen Forkmann at 12:06 Uhr

F# supports a powerful implementation of immutable lists (see Dustin’s introduction). But for my current work I needed a sorted list and of course I wanted it the F#-way, which means immutable. I didn’t want to reinvent the wheel so I asked my question in the hubFS forum.

It turned out (thanks to Robert) that F# uses red-black trees to give a fast implementation of its immutable set. This means that a set in F# stores all elements in a balanced binary tree and searching for an element costs only O(log n). As a side effect, all elements can be visited in the underlying order, which means the set implementation already provides a way to implement a sorted list. The only limitation is that a set usually doesn’t store an object more than once.
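A small illustration of that behaviour (using the old F# library function names of the time):

let s = Set.of_list [3; 1; 2; 1]
printfn "%A" (Set.to_list s)   // [1; 2; 3] – sorted, and the duplicate 1 is stored only once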

The next problem was that my sorted list needs a specific ordering function. The standard F# set implementation uses structural comparison for its ordering, which is not the order I need. But Laurent mentioned the Set.Make function in the F# PowerPack, which creates a new set type for a given ordering function. The only problem with Set.Make is that it was only designed for OCaml compatibility. With these hints I was able to put all the pieces together and got a nice immutable sorted list implementation for F#:

// wrap orderingFunction as Collections.Generic.IComparer
type 'a FComparer =
  {compareF: 'a -> 'a -> int}
  static member Create(compareF) =
    {new 'a FComparer with compareF = compareF}
  interface Collections.Generic.IComparer<'a> with
    member x.Compare(a, b) = x.compareF a b    
// an immutable sorted list - based on F# set
type 'a SortedFList =
 {items: Tagged.Set<'a,Collections.Generic.IComparer<'a>> }
   
  member x.Min = x.items.MinimumElement
  member x.Items = seq {for item in x.items do yield item}
  member x.Length = x.items.Count
  member x.IsEmpty = x.items.IsEmpty
    
  static member FromList(list, sortFunction) = 
    let comparer = FComparer<'a>.Create(sortFunction)
    {new 'a SortedFList with 
      items = Tagged.Set<'a>.Create(comparer,list)}
      
  static member FromListWithDefaultComparer(list) = 
    SortedFList<'a>.FromList(list,compare)    
      
  static member Empty(sortFunction) = 
    SortedFList<'a>.FromList([],sortFunction)
      
  static member EmptyWithDefaultComparer() = 
    SortedFList<'a>.Empty(compare)            
       
  member x.Add(y) =     
    {x with items = x.items.Add(y)}    
  member x.Remove(y) =     
    {x with items = x.items.Remove(y)}    

Please note that this implementation stores every item only once. Next time I will show a version which allows storing repeated items.
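Here is a short usage sketch with a custom ordering function (a descending sort, just as an example; expected results are shown as comments):

let descending a b = compare b a
let sorted = SortedFList<int>.FromList([5; 1; 3], descending)
printfn "%A" (sorted.Items |> Seq.to_list)   // [5; 3; 1]
printfn "%A" sorted.Min                      // 5 – the first element with respect to the given ordering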


Friday, 24. October 2008


Using PLINQ in F# – Parallel Map and Reduce (Fold) functions – part 2

Filed under: .NET 3.0,English posts,F#,Informatik,PLINQ — Steffen Forkmann at 18:00 Uhr

Last time I showed how it is possible to use parallel map and fold functions to compute the sum of all factorials between 1 and 3000. The result was nearly perfect load balancing for this task on a two-processor machine. This time I will derive a generic function that computes partial results in parallel and folds them into a final result.

Let’s consider our F# example:

let add a b = a + b  
let fac (x:bigint) = 
  [1I..x] |> List.fold_left (*) 1I
let sequential() =
  [1I..3000I]
   |> List.map fac
   |> List.fold_left add 0I

This is the same as:

let calcFactorialSum min max =
  [min..max] 
   |> List.map fac
   |> List.fold_left add 0I  
 
let f1() = calcFactorialSum    1I 2000I
let f2() = calcFactorialSum 2001I 2200I
let f3() = calcFactorialSum 2201I 2400I
let f4() = calcFactorialSum 2401I 2600I
let f5() = calcFactorialSum 2601I 2800I
let f6() = calcFactorialSum 2801I 3000I
 
let sequential2() =
  f1() + f2() + f3() + f4() + f5() + f6()

We split the summation into 6 independent tasks and computed the sum of the partial results. On its own this has almost no effect on the runtime.

But with the help of PLINQ we can compute each task in parallel:

let asParallel (list: 'a list) = 
  list.AsParallel<'a>()

let runParallel functions = 
    ParallelEnumerable.Select(
      asParallel functions, (fun f ->  f() ) )
 
let pFold foldF seed (data:IParallelEnumerable<'a>)=
  ParallelEnumerable.Aggregate<'a,'b>(
    data, seed, new Func<'b,'a,'b>(foldF))
 

let calcFactorialsParallel() =
  [f1; f2; f3; f4; f5; f6]
    |> runParallel
    |> pFold add 0I

This time we build a list of functions (f1, f2, f3, f4, f5, f6) and run them in parallel. “runParallel” gives us back the partial results, which we can fold with the function “add” to get the final result.

On my Core 2 Duo E6550 with 2.33 GHz and 3.5 GB RAM I get the following results:

Time Normal: 26.576s

Time Sequential2: 26.205s (Ratio: 0.99)

Time “Parallel Functions”: 18.426s (Ratio: 0.69)

Time PLINQ: 14.990s (Ratio: 0.56) (Last post)

Same Results: true

We can see that the parallel computation of the functions f1 – f6 is much faster than the sequential one.

But why is the PLINQ version (see last post) still faster? Each partial function takes a different amount of time (e.g. it’s much harder to calculate the factorials between 2800 and 3000 than between 2000 and 2200). On my machine I get:

Time F1: 8.738s

Time F2: 2.663s

Time F3: 3.119s

Time F4: 3.492s

Time F5: 3.889s

Time F6: 4.442s

The problem is that the Parallel Extensions framework can only guess the runtime of each task in advance, so the load balancing for 2 processors will not be optimal in every case. In the original PLINQ version there are only small tasks and the differences between their runtimes are smaller, so it is easier to balance the load.

But of course we can do better if we split f1 into two functions f7 and f8:

let f7() = calcFactorialSum    1I 1500I
let f8() = calcFactorialSum 1501I 2000I

So we can get a better load balancing:

Time F1: 8.721s

Time F7: 4.753s

Time F8: 4.829s

Time Normal: 26.137s

Time “Parallel Functions”: 16.138s (Ratio: 0.62)

Same Results: true


Thursday, 23. October 2008


Using PLINQ in F# – Parallel Map and Reduce (Fold) functions – part 1

Filed under: .NET 3.0,C#,F# — Steffen Forkmann at 18:25 Uhr

If you’re wondering how Google computes query results in such a short time, you should read the famous “MapReduce” paper by Jeffrey Dean and Sanjay Ghemawat (2004). It shows how one can split large tasks into a map and a reduce step, which can then be processed in parallel.

With PLINQ (part of the Parallel Extensions to the .NET Framework) you can easily use the “MapReduce” pattern in .NET and especially in F#. PLINQ takes care of all the multithreading and load balancing – you only have to give it a map and a reduce (or fold) function.

Let’s consider a small example: we want to compute the sum of the factorials of all integers from 1 to 3000. With List.map and List.fold_left this is a very easy task in F#:

#light
open System

let add a b = a + b
let fac (x:bigint) = [1I..x] |> List.fold_left (*) 1I

let sum =
  [1I..3000I]
    |> List.map fac
    |> List.fold_left add 0I

printfn "Sum of Factorials: %A" sum

Of course you could do much better if you don’t compute every factorial on its own (I will show this in one of the next parts) – but for now I need a simple function that is time-consuming.
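Just to sketch that idea (not used in the measurements below): a single fold can carry the running factorial along, so each factorial reuses the previous one instead of starting from scratch:

let sumOfFactorials n =
  [1I..n]
  |> List.fold_left
       (fun (fac, sum) i -> let fac' = fac * i in (fac', sum + fac'))
       (1I, 0I)
  |> snd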

This simple task takes 27 seconds on my Core 2 Duo E6550 with 2.33 GHz and 3.5 GB RAM.

But we can do better if we use parallel map and fold functions with the help of PLINQ:

let pMap (mapF:'a -> 'b) (data:IParallelEnumerable<'a>) =
  ParallelEnumerable.Select(data, mapF)

let pFold foldF seed (data:IParallelEnumerable<'a>)=
  ParallelEnumerable.Aggregate<'a,'b>(
    data, seed, new Func<'b,'a,'b>(foldF))

Now we can easily transform our calculation to a parallel version:

let sum =
  [1I..3000I].AsParallel<bigint>()
    |> pMap fac 
    |> pFold add 0I

Putting it all together, we can write a small test application:

#light 
open System
open System.Linq
open System.Diagnostics

let testRuntime f =
  let watch = new Stopwatch()
  watch.Start()
  (f(),watch.Elapsed)

let add a b = a + b
let fac (x:bigint) = [1I..x] |> List.fold_left (*) 1I

let list = [1I..3000I]

let pMap (mapF:'a -> 'b) (data:IParallelEnumerable<'a>)=
  ParallelEnumerable.Select(data, mapF)

let pFold foldF seed (data:IParallelEnumerable<'a>)=
  ParallelEnumerable.Aggregate<'a,'b>(
    data, seed, new Func<'b,'a,'b>(foldF))

let PLINQ() =
  list.AsParallel<bigint>()
    |> pMap fac
    |> pFold add 0I

let sequential() =
  list
   |> List.map fac
   |> List.fold_left add 0I

let (sumSequential,timeSequential) =
  testRuntime sequential
printfn "Time Normal: %.3fs" timeSequential.TotalSeconds

let (sumPLINQ,timePLINQ) =
  testRuntime PLINQ
printfn "Time PLINQ: %.3fs" timePLINQ.TotalSeconds

timePLINQ.TotalSeconds / timeSequential.TotalSeconds
  |> printfn "Ratio: %.2f"

sumSequential = sumPLINQ
  |> printfn "Same Results: %A"

On my machine I get the following results:

Time Normal: 27.955s

Time PLINQ: 15.505s

Ratio: 0.55

Same Results: true

This means I get nearly perfect load balancing on my two processors for this task.

In part II I describe how one can compute a series of functions in parallel.


Thursday, 16. October 2008


Debugging in Dynamics NAV 2009

Filed under: .NET 3.0,C#,Dynamics NAV 2009,msu solutions GmbH,Visual Studio — Steffen Forkmann at 13:41 Uhr

In a nice blog post, Claus Lundstrøm shows how to debug NAV 2009 code on the service tier side (and therefore also remotely) – using Visual Studio 2008 directly in the generated C# code. With this approach you are no longer forced to do your debugging in the Classic client; you can debug straight from the Dynamics NAV RoleTailored client.

Unfortunately the generated C# code is – as is almost always the case with generated code – not exactly “pretty” C# style and has little in common with the original C/AL code any more, but at least it is readable.

This is a really interesting approach and, with a bit of skill, it also enables unit testing for NAV 2009. I will try to blog a small example of that soon.


Tuesday, 14. October 2008


Technology highlights of today and tomorrow! – Survey at BASTA! 2008

Filed under: C#,F#,Veranstaltungen — Steffen Forkmann at 9:02 Uhr

Florian Mätschke has just published the survey he announced at BASTA! 2008 in Mainz on his blog. The BASTA! speakers were asked which technology fascinates them most at the moment. The “winner”, by the way, was Silverlight 2, closely followed by functional programming (in F# or LINQ) – which is also what I picked.

Overall the result is, as was to be expected at BASTA!, very .NET-heavy. Although at the evening event technologies such as the washing machine and the car were also deemed fascinating, in the end most speakers picked their own talk topic in the broadest sense.

I have to say that I find the concept of the survey very interesting. The only problem is that at, say, a Java conference you would of course get completely contrary results. To determine the real “technology highlights”, the survey would obviously have to be much larger and anonymized.
