
03 February 2016

Don't use curl in Dockerfiles

Today I broke a production server because a Docker image was missing files. How could this happen? The build was green, the app was tested, and Docker built the image without any errors.
When I looked at the Dockerfile I found some curl commands that download additional .jar files directly from the internet. Those jars contained some specialized logging logic that was not exercised during the tests, but made the application fail in production. It turned out that those files no longer existed on the internet. And that's the reason for my provocative blog title:
curl exits with code 0 even if the server responds with a 404 Not Found status code, because from a protocol point of view everything went fine. In this case scripts and Dockerfiles will not fail but silently ignore the failed download. To be fair, curl has a --fail option, but it is not the default and you have to know about it. That's the reason why I now prefer wget for downloading files in scripts: by default it returns a non-zero exit code when the download fails and therefore fails the script. If you need to pipe the downloaded file to stdout you can do that with wget -O - <url> too.
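To make this concrete, here is a minimal sketch of the three variants in a Dockerfile (the jar URL is just a placeholder):

# Bad: the image builds fine even if the server answers with 404 Not Found
RUN curl -o logging.jar http://example.com/extras/logging.jar

# Better: --fail makes curl return a non-zero exit code on HTTP errors
RUN curl --fail -o logging.jar http://example.com/extras/logging.jar

# My preference: wget fails the RUN step by default on HTTP errors
RUN wget -O logging.jar http://example.com/extras/logging.jar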

01 November 2015

Fixing WiFi repeater issues on Ubuntu 15

I'm quite happy with my switch from Windows to Linux. Despite my initial fears of having to compile the kernel because of driver issues, my Asus notebook with Xubuntu 15 worked out of the box and even WiFi works like a charm. At home my workplace is under the roof and therefore the signal strength is quite bad. So I bought a cheap WiFi repeater and the internet was fast again. Unfortunately it sometimes got slow again, especially after waking up from sleep. After spending some time with the iw* command line tools I finally found out what was going on.

It seemed that for some reason I got connected to the WiFi access point in the basement and not to the repeater. You can use the excellent wavemon tool to monitor the signal strength of all available access points. (sudo apt-get install wavemon && sudo wavemon)

As you can see in the screenshot, two access points with the same name (first column) but different BSSIDs (second column) are visible.

The fix was really easy even if I had to use the mouse for it ;-) Click on your WiFi symbol and choose "Edit Connections ...". Then choose your WiFi connection and press the "Edit" button. Click on the "WiFi" tab. In the BSSID dropdown choose the access point you want to connect to (the one with the strongest signal shown in wavemon). Don't forget to press "Save". That's it.
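If you prefer the command line, pinning the BSSID should also be possible with NetworkManager's nmcli (I did it via the GUI, so treat this as an untested sketch; connection name and MAC address are placeholders):

nmcli connection modify "MyHomeWiFi" 802-11-wireless.bssid AA:BB:CC:DD:EE:FF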



29 March 2015

Detailed ELB Latency Percentiles with Lambda

Amazon's Elastic Load Balancer (ELB) gives you only one latency metric, aggregated over all requests, and only with the usual min, max and average statistics. The value of this information is very poor. For example, when our first service went live in the Cloud, we monitored only the average latency, which was a very good and stable ~7ms. Later I found out that half of our traffic consisted of OPTIONS requests (due to a CORS configuration error) which were handled in less than one millisecond, but users actually using the functionality of our service experienced a latency between 100ms and 800ms. The problem was that those users were only a few percent of the traffic, so the really important data was covered by noise and invisible inside the average. What we needed were URL-specific metrics and percentiles, especially the 99th ones.
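To illustrate with made-up numbers how an average hides such a problem: if 96 out of 100 requests take 1 ms and the remaining 4 take 500 ms, the average is about 21 ms and looks perfectly healthy, while the 99th percentile is 500 ms and tells the real story.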

Lambda to the rescue

As CloudWatch doesn't give us more detailed metrics we were on our own. Luckily, we can instruct the ELB to write its access logs every 5 minutes to an S3 bucket, so all the raw data we need is already there. Tools like Graylog or the ELK stack could analyse them, but it takes some time to set up a pipeline that digests those logs continuously and produces the desired metrics. But AWS has a new service in its portfolio that helped us to get the desired data even faster: Lambda.
AWS Lambda is a service that runs your JavaScript code in response to events. One kind of event is the creation of an object in an S3 bucket, in our case every time the ELB writes its access log. The Lambda JavaScript code runs inside Node.js, and AWS provides its complete API as an npm module. That gives us the possibility to read the access logs whenever they are written, calculate the percentiles we are interested in, and write them back to CloudWatch as custom metrics. As soon as we have our specialized ELB metrics available in CloudWatch we can visualize/graph them, create alarms and show them on our Dashing dashboard that is already capable of integrating CloudWatch metrics.
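To illustrate the idea, here is a heavily simplified sketch of such a handler (the real code lives in the repository linked below; the metric namespace, the log column index and the minimal error handling are my own assumptions):

var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var cloudwatch = new AWS.CloudWatch();

exports.handler = function (event, context) {
    // triggered whenever the ELB writes a new access log object to S3
    var src = event.Records[0].s3;
    s3.getObject({ Bucket: src.bucket.name, Key: src.object.key }, function (err, data) {
        if (err) return context.fail(err);

        // one request per line; column 5 is the backend processing time in seconds
        var latencies = data.Body.toString().split('\n')
            .filter(function (line) { return line.length > 0; })
            .map(function (line) { return parseFloat(line.split(' ')[5]); })
            .sort(function (a, b) { return a - b; });
        if (latencies.length === 0) return context.succeed('no requests in this log');

        var p99 = latencies[Math.floor(latencies.length * 0.99)];

        cloudwatch.putMetricData({
            Namespace: 'Custom/ELB',
            MetricData: [{ MetricName: 'BackendLatencyP99', Unit: 'Seconds', Value: p99 }]
        }, function (err) {
            if (err) return context.fail(err);
            context.succeed('p99 = ' + p99);
        });
    });
};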
Data Flow

The Code

As a web developer I'm quite familiar with JavaScript, but I had never really worked with Node.js before. Nevertheless, I started with the AWS Lambda sample and was able to implement everything I wanted to do on one rainy Saturday. And I could test everything locally in Node.js. Beautiful!
I've created a repository with a simplified version of the ELB percentile to CloudWatch Lambda to give you a quick start if you want to do similar stuff. You need to adjust the bucket names and the group-by regex to your environment. The complete logic lives in lambda.js, and there are some local tests in test.js. The zip_lambda.sh script creates the upload package.
Actually, I spent most of the time fighting with cross account access policies and setting up the correct Lambda invocation and execution roles, because we are using a multi-account setup where logs and CloudWatch metrics live in different accounts. Another problem was the AWS CLI: I could not automate the Lambda upload process and had to do it manually. The zip_lambda.sh script creates the necessary command, but it never worked for me. When you create the Lambda function, make the timeout big enough to be prepared for Sunday evening traffic spikes ;-)

05 February 2014

Remove big files from Git repositories permanently

Everything is possible with Git but I can't remember all the command line options. So here is my aid to memory to make a repository slim again by removing unwanted files. I'm using git version 1.8.4 on Windows.
First you need to rewrite history:
git filter-branch --index-filter "git rm -r --cached --ignore-unmatch *.gem" --tag-name-filter cat -- --all
Note the -r and the use of wildcards inside the index-filter command. Together with the other options this means that all *.gem files in all commits and tags are found and removed. This command prints all objects it deletes. If it doesn't print anything useful you have made an error!
Now delete the backup created by git filter-branch:
rd /q /s ".git/refs/original"
Some magic to get rid of orphaned objects inside the git repository:
git reflog expire --expire=now --all
git gc --prune=now
Verify that all files are really gone with git log -- *.gem and then repack your repository.
git gc --prune=now --aggressive
Finally, push your shrunk repository to the origin.
git push origin --force
The next time you clone the repository you get the shrunk version.
Update: But as soon as you do a git pull (--rebase) all the unneeded and painfully removed objects are downloaded again to your hard disk. The only way to prevent this is to delete the repository on GitHub and replace it with the shrunk one (without changing names or urls). Astonishingly, existing clones continued to work with the replaced repository.
Update 2: GitHub now has a nice article explaining the process of cleaning/shrinking repositories, including a link to a tool called BFG Repo Cleaner that is specialized for this task.

20 November 2012

Understand and Prevent Deadlocks


Can you explain a typical C# deadlock in a few words? Do you know the simple rules that help you to write deadlock free code? Yes? Then stop reading and do something more useful.

If several threads have read/write access to the same data it is often necessary to limit access to only one thread at a time. This can be done with the C# lock statement. Only one thread can execute code that is protected by a lock statement and a lock object. It is important to understand that it is not the lock statement that protects the code, but the object given as an argument to the lock statement. If you don't know how the lock statement works, please read the MSDN documentation before continuing. Using a lock statement is better than directly using a Mutex or EventWaitHandle because it protects you from stale locks that can occur if you forget to release your lock when an exception happens.

A deadlock can occur only if you use more than one lock object and the locks are acquired by each thread in a different order. Look at the following sequence diagram:



There are two threads A and B and two resources X and Y. Each resource is protected by a lock object.
Thread A acquires a lock for Resource X and continues. Then Thread B acquires a lock for Y and continues. Now Thread A tries to acquire a lock for Y. But Y is already locked by Thread B. This means Thread A is blocked now and waits until Y is released. Meanwhile Thread B continues and now needs a lock for X. But X is already locked by Thread A. Now Thread A is waiting for Thread B and Thread B is waiting for Thread A: both threads will wait forever. Deadlock!

The corresponding code could look like this.

public class Deadlock
{
    static readonly object X = new object();
    static readonly object Y = new object();
   
    public void ThreadA()
    {
        lock(X)
        {           
            lock(Y)
            {
                // do something
            }
        }
    }

    public void ThreadB()
    {
        lock(Y)
        {
            lock(X)
            {
                // do something
            }
        }
    }
}

Normally nobody writes code with such obvious deadlocks. But look at the following code, which is deadlock free:


public class Deadlock
{
    static readonly object X = new object();
    static readonly object Y = new object();
    static object _resourceX;
    static object _resourceY;

    public object ResourceX
    {
        get { lock (X) return _resourceX; }
    }

    public object ResourceY
    {
        get
        {
            lock (Y)
            {
                return _resourceY ?? (_resourceY = "Y");
            }
        }
    }

    public void ThreadA()
    {
        Console.WriteLine(ResourceX);
    }

    public void ThreadB()
    {
        lock(Y)
        {
            _resourceY = "TEST";
            Console.WriteLine(ResourceX);
        }
    }
}


But after refactoring the getter for ResourceX to this

get { lock (X) return _resourceX ?? ResourceY; }

you have the same deadlock as in the first code sample!

Deadlock prevention rules


  1. Don't use static fields. Without static fields there is no need for locks.
  2. Don't reinvent the wheel. Use thread-safe data structures from System.Collections.Concurrent or the Interlocked class before pouring lock statements over your code.
  3. A lock statement must be short in code and time. The lock should last nanoseconds not milliseconds.
  4. Don't call external code inside a lock block. Try to move this code outside the lock block (see the sketch after this list). Only the manipulation of known private data should be protected. You don't know whether external code contains locks now or in the future (think of refactoring).
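Here is a small sketch of what rule 4 means in practice; the class and the external callback are made up for illustration:

using System;
using System.Collections.Generic;

public class Inventory
{
    private readonly object _sync = new object();
    private readonly List<string> _items = new List<string>();
    private readonly Action<string> _externalCallback; // code that is not under our control

    public Inventory(Action<string> externalCallback)
    {
        _externalCallback = externalCallback;
    }

    // Risky: external code runs while we hold the lock and may acquire its own locks
    public void AddAndNotifyInsideLock(string item)
    {
        lock (_sync)
        {
            _items.Add(item);
            _externalCallback(item);
        }
    }

    // Better: protect only the private data and call the external code outside the lock
    public void AddAndNotify(string item)
    {
        lock (_sync)
        {
            _items.Add(item);
        }
        _externalCallback(item);
    }
}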

If you are following these rules you have a good chance to never introduce a deadlock in your whole career.



14 November 2012

How to find deadlocks in an ASP.NET application almost automatically


This is a quick how-to for finding deadlocks in an IIS/ASP.NET application running on a production server with .NET 4 or .NET 4.5.

A deadlock bug inside your ASP.NET application is very ugly. And if it manifests only on some random production server of your web farm, maybe you feel like doom is immediately ahead. But with some simple tools you can catch and analyze those bugs.

These are the tools you need:
  • ProcDump from SysInternals   
  • WinDbg from Microsoft (available as part of the Windows SDK)
  • sos.dll (part of the .NET framework)
  • sosex from Steve's Techspot (copy it into your WinDbg binaries folder)
ProcDump will be needed on the server where the deadlock occurs. All the other tools are only needed on your developer machine. Because WinDbg doesn't need any installation you can also prepare an emergency USB stick (or file share) with all the necessary tools.

If you think a deadlock occurred do the following:
  1. Connect to the Server
  2. Open IIS Manager 
  3. Open Worker Processes 
  4. Select the application pool that is suspected to be deadlocked
  5. Verify that you indeed have a deadlock, see the screenshot below
  6. Notice the <Process-ID> (see screenshot)
  7. Create a dump with procdump <Process-ID> -ma
    There are other tools, like Task Manager or Process Explorer, that can create dumps, but only ProcDump is smart enough to create 32-bit dumps for 32-bit processes on a 64-bit OS.
  8. Copy the dump and any available .pdb (symbol) files to your developer machine. 
  9. Depending on the bitness of your dump start either WinDbg (X86) or WinDbg (X64)
  10. Init the symbol path (File->Symbol File Path ...)
    SRV*c:\temp\symbols*http://msdl.microsoft.com/download/symbols
  11. File->Open Crash Dump
  12. Enter the following commands in the WinDbg Command Prompt and wait
  13. .loadby sos clr
  14. !load sosex
  15. !dlk
You should now see something like this:

0:000> .loadby sos clr
0:000> !load sosex
This dump has no SOSEX heap index.
The heap index makes searching for references and roots much faster.
To create a heap index, run !bhi
0:000> !dlk
Examining SyncBlocks...
Scanning for ReaderWriterLock instances...
Scanning for holders of ReaderWriterLock locks...
Scanning for ReaderWriterLockSlim instances...
Scanning for holders of ReaderWriterLockSlim locks...
Examining CriticalSections...
Scanning for threads waiting on SyncBlocks...
*** WARNING: Unable to verify checksum for mscorlib.ni.dll
Scanning for threads waiting on ReaderWriterLock locks...
Scanning for threads waiting on ReaderWriterLocksSlim locks...
*** WARNING: Unable to verify checksum for System.Web.Mvc.ni.dll
*** ERROR: Module load completed but symbols could not be loaded for System.Web.Mvc.ni.dll
Scanning for threads waiting on CriticalSections...
*** WARNING: Unable to verify checksum for System.Web.ni.dll
*DEADLOCK DETECTED*
CLR thread 0x5 holds the lock on SyncBlock 0126fa70 OBJ:103a5878[System.Object]
...and is waiting for the lock on SyncBlock 0126fb0c OBJ:103a58d0[System.Object]
CLR thread 0xa holds the lock on SyncBlock 0126fb0c OBJ:103a58d0[System.Object]
...and is waiting for the lock on SyncBlock 0126fa70 OBJ:103a5878[System.Object]
CLR Thread 0x5 is waiting at System.Threading.Monitor.Enter(System.Object, Boolean ByRef)(+0x17 Native)
CLR Thread 0xa is waiting at System.Threading.Monitor.Enter(System.Object, Boolean ByRef)(+0x17 Native)


1 deadlock detected.


Now you know that the managed threads 0x5 and 0xa are waiting on each other. With the !threads command you get a list of all threads; the Id column (in decimal) is the managed thread id, and the WinDbg thread number is shown on the left. With the ~[5]e!clrstack command you can see the stacktrace of CLR thread 0x5, or just use ~*e!clrstack to see all stacktraces. With this information you should immediately see the reason for the deadlock and can start fixing the problem.


Deadlocked Requests visible in IIS Worker Process

Automate the Deadlock Detection

If you are smart, create a little script that automates steps 2 to 7. We use the following PowerShell script to check for a deadlock situation every minute:


param($elapsedTimeThreshold, $requestCountThreshold)

# WebAdministration provides the IIS:\ drive and Get-WebRequest
Import-Module WebAd*

$appPools = Get-Item IIS:\AppPools\*
$i = 1

# check five times, one minute apart
while ($i -le 5) {
    foreach ($appPool in $appPools) {
        # count requests that have been running longer than the threshold
        $count = ($appPool | Get-WebRequest | ? { $_.timeElapsed -gt $elapsedTimeThreshold }).count
        if ($count -gt $requestCountThreshold) {
            # looks like a deadlock: dump the worker process of this application pool
            $id = dir IIS:\AppPools\$($appPool.Name)\WorkerProcesses\ | Select-Object -expand processId
            $filename = "id_" + $id + ".dmp"
            $allArgs = @("-ma", $id, $filename)
            procdump.exe $allArgs
        }
    }
    Start-Sleep -s 60
    $i++
}


07 November 2012

Cooperative Thread Abort in .NET


Did you know that .NET uses a cooperative thread abort mechanism?

As someone coming from a C++ background I always thought that killing a thread is bad behavior and should be prevented at all costs. Terminating a Win32 thread can interrupt it in any state, at any machine instruction, so it may leave corrupted data structures behind.
A .NET thread cannot be terminated. Instead you can abort it. This is not just a naming issue, it is indeed a different behavior. If you call Abort() on a thread it will throw a ThreadAbortException in the aborted thread. All active catch and finally blocks will be executed before the thread eventually gets terminated. In theory, this allows the thread to do a proper cleanup. In reality this works only if every line of code is written in a way that can handle a ThreadAbortException properly. And the first time you call 3rd-party code not under your control you are doomed.

To make the situation more complex there are situations where throwing the ThreadAbortException is delayed. In versions prior to .NET 4.5 this was poorly documented, but the brand new documentation of the Thread.Abort method is very explicit about this: a thread cannot be aborted inside a catch or finally block, a static constructor (and probably not during the initialization of static fields either) or any other constrained execution region.
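Here is a minimal console sketch of such a delayed abort (my own illustration for the full .NET Framework, not the linked sample):

using System;
using System.Threading;

class CooperativeAbortSketch
{
    static void Main()
    {
        var worker = new Thread(() =>
        {
            try
            {
                // nothing to do here
            }
            finally
            {
                // Abort() is requested while we are inside this finally block,
                // so the ThreadAbortException is delayed until the block has finished
                Console.WriteLine("long running cleanup...");
                Thread.Sleep(3000);
                Console.WriteLine("cleanup done");
            }
            Console.WriteLine("never reached: the delayed abort fires right after the finally block");
        });

        worker.Start();
        Thread.Sleep(500);   // give the worker time to enter the finally block
        worker.Abort();      // requests the abort, but it is deferred
        worker.Join();
        Console.WriteLine("worker terminated");
    }
}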


Why is this important to you?

Well, if you are working in an ASP.NET/IIS environment the framework itself will call Abort() on threads which are executing too long. In this way IIS can heal itself if some requests hit bad blocking code like endless loops, deadlocks or waiting on external requests. But if you were unlucky enough to implement your blocking code inside static constructors, catch or finally blocks, your requests will hang forever in your worker process. It will look like the httpRuntime executionTimeout is not working, and only an iisreset will cure the situation.
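For reference, this is the timeout I mean. It lives in web.config, is given in seconds (the default is 110) and only applies when compilation debug is set to false:

<system.web>
  <compilation debug="false" />
  <httpRuntime executionTimeout="110" />
</system.web>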

Download the CooperativeThreadAbortDemo sample application.

27 September 2012

Embed Url Links in TeamCity Build Logs

We at AutoScout24 are using TeamCity for our Continuous Integration and Delivery. One step in our Release Pipeline is integration testing in different browsers. If our test framework detects a problem it will create a screenshot of the breaking page. This screenshot contains valuable information that helps our developers to quickly analyze the issue. As an example, our test servers are configured to send stack traces which are visible in the screenshots.

But it seems that there is no way to include links in TeamCity build logs. TeamCity correctly escapes all the output from test and build tools so it is not possible to get some html into the log. So I investigated TeamCity's extension points. Writing a complete custom html report seemed to be overkill because the test reporting tab worked really well. Writing a UI plugin means learning Java, JSP and so on, and as a .NET company we don't want to fumble around with the Java technology stack. But as a web company we know how to hack javascript ;-)

There is already a TeamCity plugin called StaticUIExtensions that allows you to embed static html fragments in TeamCity pages. And because a script tag is a valid html fragment we can inject javascript into TeamCity. So I wrote a few lines of javascript that scan the DOM for urls and transform them into links. With this technique you get clickable links in all the build logs.


What you need to do:
  1. Install StaticUIExtensions 
  2. On your TeamCity server, open your "server\config\_static_ui_extensions" folder
  3. Open static-ui-extensions.xml
  4. Add a new rule that inserts "show-link.html" into every page that starts with "viewLog.html"
      <rule html-file="show-link.html" place-id="BEFORE_CONTENT">
        <url starts="viewLog.html" />
      </rule>
  5. Create a new file "show-link.html" with this content:
    <script>
      (function ($) {
        var regex = /url\((.*)\)/g
     
        function createLinksFromUrls() {
          $("div .fullStacktrace, div .msg").each(function () {
            var div = $(this);
            var oldHtml = div.html();
            var newHtml = oldHtml.replace(regex, "<a href='$1' target='_blank'>$1</a>");
            if (oldHtml !== newHtml) div.html(newHtml);
          });
        }
     
        $(document).ready(createLinksFromUrls);
        $(document).click(function () {
            window.setTimeout(createLinksFromUrls, 50);
            window.setTimeout(createLinksFromUrls, 100);
            window.setTimeout(createLinksFromUrls, 500);
        });
    })(window.jQuery); 
    
    </script>
    
This javascript searches for the url(*) pattern and replaces it with <a> tags. Because TeamCity uses ajax to load the stacktraces when you expand a tree node, I used timers to delay the DOM processing until the ajax call has succeeded. Now you can log something like "url(http://www.autoscout24.de)" and this will be transformed to <a href="http://www.autoscout24.de">http://www.autoscout24.de</a>.

Voila, Mission completed.

10 August 2012

"Right" vs "Simple"

Compare these two Software Design philosophies:

MIT

Simplicity
The design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
Correctness
The design must be correct in all observable aspects. Incorrectness is simply not allowed.
Consistency
The design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
Completeness
The design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.

New Jersey

Simplicity
The design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
Correctness
The design must be correct in all observable aspects. It is slightly better to be simple than correct.
Consistency
The design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.
Completeness
The design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.
These philosophies were formulated by Richard Gabriel in "Worse is Better". Please read his article. I followed the first philosophy for many years before I learned the hard way that the second philosophy has a much higher success rate, and if you programmed the right thing it will improve incrementally until it is much more right than anything designed with the first philosophy in mind. One last quote: "The right thing takes forever to design, but it is quite small at every point along the way. To implement it to run fast is either impossible or beyond the capabilities of most implementors."

29 March 2012

HTML5 without warnings in Visual Studio

Today I was annoyed by the warnings that Visual Studio shows when editing an HTML5 file. Example: VS expects a type attribute inside the script tag, but HTML5 doesn't require it anymore (because it defaults to JavaScript).

When opening the context menu I noticed the "Formatting and Validation" item and opened it:

(Screenshot: the "Formatting and Validation" settings dialog)

Choosing "HTML5" as a target removes all those annoying wrong warnings :-)

27 November 2011

Speed up your build with UseHardlinksIfPossible

MSBuild 4.0 added the new attribute "UseHardlinksIfPossible" to the Copy task. Using hardlinks makes your build faster because fewer IO operations and less disk space are needed (= better usage of the file system cache). What's best is that this new option is already used by the standard .NET build system! But Microsoft decided to turn it off by default.

After searching a little bit in the C# target files I found out how to turn this feature on globally, and my build was 20% faster than before. And if you have big builds with more than a hundred projects this counts!

So here comes the way to turn on hard linking in your build. First, this works only with NTFS. Second, you have to explicitly set the ToolsVersion to 4.0. You can do this with a command line argument (msbuild /tv:4.0) or inside the project file (<Project DefaultTargets="Build" ToolsVersion="4.0" ...>).

Then you have to override the following properties with a value of "True":

  • CreateHardLinksForCopyFilesToOutputDirectoryIfPossible
  • CreateHardLinksForCopyAdditionalFilesIfPossible
  • CreateHardLinksForCopyLocalIfPossible
  • CreateHardLinksForPublishFilesIfPossible

Use command line properties (msbuild /p:CreateHardLinksForCopyLocalIfPossible=true) to override them for all projects in one build. Or you can create a little startup build file that collects all projects and set the properties in one place. Here is mine:

Code Snippet
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" >

  <PropertyGroup>    
    <Include>**\*.csproj</Include>
    <Exclude>_build\**\*.csproj</Exclude>
    <CreateHardLinksIfPossible>true</CreateHardLinksIfPossible>
  </PropertyGroup>

  <ItemGroup>
    <ProjectFiles Include="$(Include)" Exclude="$(Exclude)"/>
  </ItemGroup>

  <Target Name="Build" >
    <MSBuild Projects="@(ProjectFiles)" Targets="Build" BuildInParallel="True" ToolsVersion="4.0"
             Properties="Configuration=$(Configuration);
                         BuildInParallel=True;
                         CreateHardLinksForCopyFilesToOutputDirectoryIfPossible=$(CreateHardLinksIfPossible);
                         CreateHardLinksForCopyAdditionalFilesIfPossible=$(CreateHardLinksIfPossible);
                         CreateHardLinksForCopyLocalIfPossible=$(CreateHardLinksIfPossible);
                         CreateHardLinksForPublishFilesIfPossible=$(CreateHardLinksIfPossible);
                         " />    
  </Target>
  
</Project>

26 June 2011

Exception Logging Antipatterns

Here are some logging antipatterns I have seen again and again in real life production code. If your application has one global exception handler, catching and logging should be done only in this central place. If you want to provide additional information, throw a new exception and attach the original exception. I assume that the logging framework is capable of dumping an exception recursively, that is, with all inner exceptions and their stack traces.

Catch Log Throw
catch (Exception ex)
{
    _logger.WriteError(ex);
    throw;
}

No additional info is added. The global exception handler will log this error anyway, therefore the logging is redundant and blows up your log. The correct solution is to not catch the exception at all.

Catch Log Throw Other

catch (Exception ex)
{
    _logger.WriteError(ex, "information");
    throw new InvalidOperationException("information"); // same information
}
Same as Catch Log Throw, but now you have two totally unrelated log entries. Solution: use the InnerException mechanism to create a new exception and don't log the old one:
throw new InvalidOperationException("information", ex);

Log Un-thrown Exceptions
catch (Exception ex)
{
    var myException = new MyException("information");
    _logger.WriteError(myException);
    throw myException;
}

In this case an un-thrown exception is logged. This can cause problems because the exception is not fully initialized until it has been thrown. For example the StackTrace property would be null. Solution: don't log, just attach the original exception ex to MyException:
throw new MyException("information", ex);

Non Atomic Logging
catch (Exception ex)
{
    _logger.WriteError(ex.Message);
    _logger.WriteError("Some information");
    _logger.WriteError(ex);
    _logger.WriteError("More information");
}

Several log messages are created for one cause. In the log they appear unrelated and can be interleaved with other log messages. Solution: combine the information into one atomic write to the logging system: _logger.WriteError(ex, "Some information and more information");

Expensive Log Messages
{
    [...] // some code
    _logger.WriteInformation(Helper.ReflectAllProperties(this));
}
This one is really dangerous for your performance. An expensive log message is generated every time, even if the logging system is configured to ignore it. If you have expensive messages, put the generation into an if block side by side with the logging statement:
if (_logger.ShouldWrite(LogLevel.Information))
{
    // do expensive logging here
    _logger.WriteInformation(Helper.ReflectAllProperties(this));

}
 

13 June 2011

Disable ASP.NET Development Server

I always forget how to stop the ASP.NET Development Server from starting when I attach to IIS for debugging, so here is the way to do it:
  1. Select the web or WCF project. Press F4 to show the property window. If only an empty window appears, repeat the process.
  2. Set the "Always Start When Debugging" property to "False".
(Screenshot: the "Always Start When Debugging" property set to False)
If your solution contains projects that start the ASP.NET Development Server you will enjoy my macro that sets this property solution wide:
Sub TurnOffAspNetDebugging()

    REM The dynamic property CSharpProjects returns all projects
    REM recursively. "Solution.Projects" would return only the top
    REM level projects. Use VBProjects if you are using VB :-).          
    Dim projects = CType(DTE.GetObject("CSharpProjects"), Projects)

    For Each p As Project In projects
        For Each prop In p.Properties
            If prop.Name = "WebApplication.StartWebServerOnDebug" Then
                prop.Value = False                
            End If
        Next
    Next
End Sub

Update: This solution no longer works for VS2012. An addin with the same functionality is available on GitHub: https://github.com/algra/VSTools 

16 February 2011

Visual Studio 2010 Javascript Snippets for Jasmine

Because Resharper 5 does not support live templates for JavaScript I'm forced to use the built-in VS2010 snippets. The default JavaScript snippets are located here:

%ProgramFiles%\Microsoft Visual Studio 10.0\Web\Snippets\JScript\1033\JScript

The ‘1033’ locale ID may be different for your country. I’m using the following snippets for creating Jasmine specs:

describe

<CodeSnippet Format="1.1.0" xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <Header>
    <Title>describe</Title>
    <Author>Christian Rodemeyer</Author>
    <Shortcut>describe</Shortcut>
    <Description>Code snippet for a jasmine 'describe' function</Description>
    <SnippetTypes>
      <SnippetType>Expansion</SnippetType>
    </SnippetTypes>
  </Header>
  <Snippet>
    <Declarations>
      <Literal>
        <ID>suite</ID>
        <ToolTip>suite description</ToolTip>
        <Default>some suite</Default>
      </Literal>
    </Declarations>
    <Code Language="jscript"><![CDATA[describe("$suite$", function () {
        $end$        
    });]]></Code>
  </Snippet>
</CodeSnippet>

it

<CodeSnippet Format="1.1.0" xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <Header>
    <Title>it</Title>
    <Author>Christian Rodemeyer</Author>
    <Shortcut>it</Shortcut>
    <Description>Code snippet for a jasmine 'it' function</Description>
    <SnippetTypes>
      <SnippetType>Expansion</SnippetType>
    </SnippetTypes>
  </Header>
  <Snippet>
    <Declarations>
      <Literal>
        <ID>spec</ID>
        <ToolTip>spec description</ToolTip>
        <Default>expected result</Default>       
      </Literal>    
    </Declarations>
    <Code Language="jscript"><![CDATA[it("should be $spec$", function () {
        var result = $end$       
    });]]></Code>
  </Snippet>
</CodeSnippet>

func

<CodeSnippet Format="1.1.0" xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <Header>
    <Title>function</Title>
    <Author>Christian Rodemeyer</Author>
    <Shortcut>func</Shortcut>
    <Description>Code snippet for an anonymous function</Description>
    <SnippetTypes>
      <SnippetType>Expansion</SnippetType>
      <SnippetType>SurroundsWith</SnippetType>
    </SnippetTypes>
  </Header>
  <Snippet>
    <Code Language="jscript"><![CDATA[function () {
        $selected$$end$
    }]]></Code>
  </Snippet>
</CodeSnippet>
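For reference, a spec created with the describe and it snippets looks roughly like this after filling in the placeholders (calculateAnswer is just a made-up function under test):

describe("some suite", function () {
    it("should be expected result", function () {
        var result = calculateAnswer();
        expect(result).toBe(42);
    });
});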

10 February 2011

Removing the mime-type of files in Subversion with SvnQuery

If you add files to Subversion they get associated with a mime-type. SvnQuery will only index text files, that is, files without an svn:mime-type property or where the property is set to something like "text/*". At work I wondered why I couldn't find some words that I knew must exist. It turned out that Subversion marks files stored as UTF-8 with a BOM as binary, using svn:mime-type application/octet-stream. This forces the indexer to ignore the content of the file.

I used SvnQuery to find all files that are marked as binary, e.g. t:app* .js finds all JavaScript files and t:app* .cs finds all C# files. With the download button at the bottom of the results page I downloaded a text file with the results. Because svn propdel svn:mime-type [PATH] works only on one file at a time (it has no --targets option) I had to modify the text file to create a small batch script like this:

svn propdel svn:mime-type c:\workspaces\javascript\file1.js
svn propdel svn:mime-type c:\workspaces\javascript\file2.js
svn propdel svn:mime-type c:\workspaces\javascript\file3.js

After this change indexing worked again. I now run a daily query that ensures that no source files are marked as binary.