While I've been heads down in work, Ralf released the first version of dotnet.tv in English: Web Services.
Posted at 06:35 AM | Permalink | Comments (0) | TrackBack (0)
I'm currently living in performance-testing land for a distributed application. This time, it's not enough to merely know the time it takes to execute certain functionality; I also need to know what toll a certain feature takes on all machines involved (client, server, DB server).
I've therefore created another small helper class which can be used to report the exact CPU usage ("% Processor Time" in perfmon.exe) between two points in time for your process.
static void Main(string[] args)
{
   CPUMeter mtr = new CPUMeter();

   // do some heavy stuff
   double result = 0;
   for (int i = 0; i < 100000000; i++)
   {
      result = result + Math.Sin(i);
   }

   double usage = mtr.GetCpuUtilization();
   Console.WriteLine("Done. CPU Usage {0:#00.00} %", usage);
   Console.ReadLine();
}
using System;
using System.Diagnostics;

public class CPUMeter : IDisposable
{
   CounterSample _startSample;
   PerformanceCounter _cnt;

   /// Creates a per-process CPU meter instance tied to the current process.
   public CPUMeter()
   {
      String instancename = GetCurrentProcessInstanceName();
      _cnt = new PerformanceCounter("Process", "% Processor Time", instancename, true);
      ResetCounter();
   }

   /// Creates a per-process CPU meter instance tied to a specific process.
   public CPUMeter(int pid)
   {
      String instancename = GetProcessInstanceName(pid);
      _cnt = new PerformanceCounter("Process", "% Processor Time", instancename, true);
      ResetCounter();
   }

   /// Resets the internal counter. All subsequent calls to GetCpuUtilization() will
   /// be relative to the point in time when you called ResetCounter(). This
   /// method can be called as often as necessary to get a new baseline for
   /// CPU utilization measurements.
   public void ResetCounter()
   {
      _startSample = _cnt.NextSample();
   }

   /// Returns this process's CPU utilization since the last call to ResetCounter().
   public double GetCpuUtilization()
   {
      CounterSample curr = _cnt.NextSample();
      double diffValue = curr.RawValue - _startSample.RawValue;
      double diffTimestamp = curr.TimeStamp100nSec - _startSample.TimeStamp100nSec;
      double usage = (diffValue / diffTimestamp) * 100;
      return usage;
   }

   private static string GetCurrentProcessInstanceName()
   {
      Process proc = Process.GetCurrentProcess();
      int pid = proc.Id;
      return GetProcessInstanceName(pid);
   }

   private static string GetProcessInstanceName(int pid)
   {
      PerformanceCounterCategory cat = new PerformanceCounterCategory("Process");
      string[] instances = cat.GetInstanceNames();
      foreach (string instance in instances)
      {
         using (PerformanceCounter cnt = new PerformanceCounter("Process",
            "ID Process", instance, true))
         {
            int val = (int) cnt.RawValue;
            if (val == pid)
            {
               return instance;
            }
         }
      }
      throw new Exception("Could not find performance counter " +
         "instance name for current process. This is truly strange ...");
   }

   public void Dispose()
   {
      if (_cnt != null) _cnt.Dispose();
   }
}
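Since the whole point was to know the toll a feature takes on all machines involved, the pid-based constructor comes in handy: you can meter any other local process, for example the database server process. Here's a minimal sketch under that assumption; the process name "sqlservr" and the ten-second measurement window are just placeholders for your own scenario:

using System;
using System.Diagnostics;
using System.Threading;

class RemoteProcessMeterSample
{
   static void Main()
   {
      // Hypothetical example: meter the first SQL Server process found on this
      // machine (process name assumed to be "sqlservr"); any local process works.
      Process[] procs = Process.GetProcessesByName("sqlservr");
      if (procs.Length == 0)
      {
         Console.WriteLine("No sqlservr process found.");
         return;
      }

      CPUMeter mtr = new CPUMeter(procs[0].Id);

      // ... run the scenario you want to measure here; a plain Sleep() stands in
      // for the real workload ...
      Thread.Sleep(10000);

      Console.WriteLine("sqlservr used {0:#00.00} % CPU during the test run",
         mtr.GetCpuUtilization());
   }
}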
Have fun!
Posted at 09:37 PM | Permalink | Comments (1) | TrackBack (0)
Florian Lazar on DTC Transactions and Windows XP SP2. I'll definitely need this more often than not.
Posted at 01:51 PM | Permalink | Comments (0) | TrackBack (0)
I've just posted the complete thinktecture TechEd Europe schedule (including Christian and Christian's sessions). Here's the subset of sessions I'll do:
Monday: 19:30 - 20:45 "Distributed Applications - Today and in the Future"
The Netherlands' .NET User Group (.NET Gebruikersgroep Nederland)
(Right before this session, my friend Juval Lowy from IDesign talks about Enterprise Services. Rumors are that Christian Weyer will also be around at that .NET User Group Meeting.
It's definitely going to be "Distributed Application Night" over there --- TechEd attendance is not required and this meeting is of course free!)
Tuesday: 16:30 - 17:45 [CHT024] Remoting versus Enterprise Services
In this Chalk-&-Talk, we'll discuss the trade-offs when selecting a distributed application technology for your applications today. We'll cover implications and requirements such as performance, scalability, and security, and provide an outlook on future technologies in this space.
Friday: 16:15 - 17:30 [DEV401] Building Extensible Applications using Attributes, Reflection and Code Generation (Ingo Rammer)
Wouldn't it be great to extend your applications with scripting and plug-ins? This would make it possible to ship several customer-specific versions, or even to let power users extend the application themselves. In this session, Ingo Rammer will demonstrate how you can use .NET technologies (Attributes, Reflection, and CodeDom) to add these plug-in and scripting capabilities to your application.
Posted at 11:53 AM | Permalink | Comments (0) | TrackBack (0)
Will you be in Amsterdam at TechEd? thinktecture will be there, and we'll cover a lot of aspects of distributed applications, globalization, and application extensibility. I am really looking forward to meeting you there! --Ingo
19:30 - 20:45 "Distributed Applications - Today and in the Future" (Ingo Rammer)
(Right before this session, our friend Juval Lowy from IDesign talks about Enterprise Services. Rumors are that Christian Weyer will also be around at that .NET User Group Meeting.
It's definitely going to be "Distributed Application Night" over there --- TechEd attendance is not required and this meeting is of course free!)
16:30 - 17:45 [CHT024] Remoting versus Enterprise Services (Ingo Rammer)
In this Chalk-&-Talk, we'll discuss the trade-offs when selecting a distributed application technology for your applications today. We'll cover implications and requirements such as performance, scalability, and security, and provide an outlook on future technologies in this space.
18:15 - 19:30, [CHT010] Design Choices in Distributed Applications (Christian Nagel)
There are many choices in the design of distributed solutions: should I use a DataReader or a DataSet? Should communication with the components be done using .NET Remoting or ASP.NET Web Services? Or the old DCOM protocol? For the user interface, what are the advantages of Windows Forms compared to ASP.NET? There is no single technology you should always prefer over another. Every technology has advantages and disadvantages, which will be discussed here so you can select the technologies that fit your solutions best.
18:15 - 19:30, [CHT011] And You Thought You Knew about Web Services?! (Christian Weyer, Beat Schwegler, Terry Leeper)
Many people apply Web Services techniques in an RPC-based fashion, but this alone doesn't justify the use of Web Services standards. Web Services are about messaging, and messaging is all about the message! Come and join us to discuss what actually makes Web Services 'service-like', how you can apply service-oriented principles with existing Web Services platforms, as well as the motivation for the upcoming WS specifications! Yes, we care about messages... how about you?
14:45 - 16:00, [CHT011] (Repeat) And You Thought You Knew about Web Services?! (Christian Weyer, Beat Schwegler, Terry Leeper)
Many people apply Web Services techniques in an RPC-based fashion, but this alone doesn't justify the use of Web Services standards. Web Services are about messaging, and messaging is all about the message! Come and join us to discuss what actually makes Web Services 'service-like', how you can apply service-oriented principles with existing Web Services platforms, as well as the motivation for the upcoming WS specifications! Yes, we care about messages... how about you?
Tech Ed Party
Not that we'd have too much to do with that one. But we'll definitely be there ;-)
08:30 - 09:45, [DEV319] Creating Efficient, High Performance XML Web Services (Christian Weyer)
A lot of customers share the same dream: improving Web services performance. XML-based Web services carry the stigma of being 'somewhat slower' than other communication means and of not scaling out well. This session tries to clear up some of the prejudices and leads you through a series of measures to improve the overall responsiveness of your ASP.NET Web services. Key to this is understanding the architecture of ASMX Web services and the anatomy of a Web service request from both the client and server-side perspectives. See a set of key Web service design considerations followed by essential Web service performance and scalability issues. Learn how to configure, tweak and program for the best results you can get out of the ASMX engine today.
16:15 - 17:30, [DEV404] Creating a Klingon Culture - More about Globalization and Resource Management (Christian Nagel)
.NET has great built-in support to internationalize and globalize applications. More than that, .NET allows extending the localization support. This session demonstrates how the localization support can be extended using the Klingon culture. You will see how to adapt a calendar, create a custom resource reader to read resources from the database, define a custom format output for your classes, and more. You can use the techniques shown here to create a new culture that is not part of the framework, and create sub-cultures for small local regions. Looking at how to extend the Framework, you will also see the relationships between the different classes, so you can work with globalization and resource management very efficiently. This session also covers .NET 2.0 features for globalization and localization.
16:15 - 17:30 [DEV401] Building Extensible Applications using Attributes, Reflection and Code Generation (Ingo Rammer)
Wouldn't it be great to extend your applications with scripting and plug-ins? This would make it possible to ship several customer-specific versions, or even to let power users extend the application themselves. In this session, Ingo Rammer will demonstrate how you can use .NET technologies (Attributes, Reflection, and CodeDom) to add these plug-in and scripting capabilities to your application.
Posted at 11:25 AM | Permalink | Comments (0) | TrackBack (0)
Scott has a great list of must-read resources for Internationalization. Including the differences in emoticons ... (^_^)
Posted at 08:12 AM | Permalink | Comments (0) | TrackBack (0)
I really love the managed code incarnation of the MSMQ API. Sending a message to a remote destination is as easy as the following:
String remoteQueueFormatName = @"DIRECT=OS:testsrv2\Private$\perf_server";

MessageQueue remoteQueue = new MessageQueue(@"FormatName:" + remoteQueueFormatName);

Message m = new Message();
m.Body = "Foo";
m.Label = "Test";
remoteQueue.Send(m);
But what happens if the destination machine is not online? Well, all messages will be queued in a local "outgoing queue", which you can see in Computer Management -> Services and Applications -> Message Queuing -> Outgoing Queues.
You can also purge outgoing messages from there if you need to.
If you try to achieve the same behavior by running the following code in your application, you'll get a very different result:
String remoteQueueFormatName = @"DIRECT=OS:testsrv2\Private$\perf_server";

MessageQueue remoteQueue = new MessageQueue(@"FormatName:" + remoteQueueFormatName);
remoteQueue.Purge();
In this case, the remote computer will be contacted and the complete content of the remote queue will be purged! Not exactly what you tried to achieve ...
This used to be different. Whenever you opened a remote queue with the old-style MSMQ API (both in the COM and the C API), you had the possibility to specify whether you wanted to access the "real" remote queue on the remote machine, or whether you'd just like to work with the matching outgoing queue which contains the messages that have not yet been delivered to the remote queue. This is not possible using System.Messaging.
Another missing piece in this API is related to the management of the disk-based queue files. Whenever you send a message with the flag Recoverable=true (or, in old-style API speak, with the delivery option MQMSG_DELIVERY_RECOVERABLE), it will first be stored to disk and will only be accepted for delivery afterwards. In this case, the client-side Send() call will block until the file has been flushed to disk, to make sure that the message survives a possible computer crash or power loss. If, on the other hand, you send a message using Recoverable=false (or MQMSG_DELIVERY_EXPRESS), the message will be transferred in memory, but might also end up on disk depending on the time it stays in the queue and on memory conditions. The message will, however, be lost as soon as you stop/start the MSMQ service or reboot the machine.
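For completeness, this is what the two delivery modes look like with System.Messaging; a minimal sketch, with the queue path taken from the example above and the message bodies purely illustrative:

using System.Messaging;

class RecoverableSendSample
{
   static void Main()
   {
      MessageQueue queue = new MessageQueue(
         @"FormatName:DIRECT=OS:testsrv2\Private$\perf_server");

      Message m = new Message();
      m.Body = "Important data";
      m.Label = "Recoverable test";

      // Recoverable = true: the message is written to disk before Send() returns,
      // so it survives a restart of the MSMQ service or a reboot.
      m.Recoverable = true;
      queue.Send(m);

      Message express = new Message();
      express.Body = "Transient data";
      express.Label = "Express test";

      // Recoverable = false (the default): the message travels in memory and is
      // lost when the MSMQ service is stopped or the machine reboots.
      express.Recoverable = false;
      queue.Send(express);
   }
}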
For this on-disk storage, MSMQ uses a number of files, each of which is 4 MB in size. They are used to store one or more messages, depending on the messages' sizes. (This is, by the way, also the reason for the original message size limit of 4 MB, as one message had to fit into one file.) These files are accessed as memory-mapped files which are either flushed to disk immediately (if recoverable) or not (in express mode). You can find them in %SYSTEMROOT%\System32\msmq\storage (which is c:\windows\system32\msmq\storage on my machine).
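If you're curious which storage files currently exist on your machine, a few lines of code are enough to list them; a small sketch, assuming the default storage location mentioned above (reading that directory typically requires administrative rights):

using System;
using System.IO;

class StorageFilesSample
{
   static void Main()
   {
      // Default MSMQ storage location as described above;
      // adjust the path if your installation differs.
      string storageDir = Path.Combine(
         Environment.GetEnvironmentVariable("SystemRoot"),
         @"System32\msmq\storage");

      foreach (string file in Directory.GetFiles(storageDir))
      {
         FileInfo fi = new FileInfo(file);
         Console.WriteLine("{0,-20} {1,12:N0} bytes", fi.Name, fi.Length);
      }
   }
}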
If your application receives a queue's messages (or if you purge them), these files will not be deleted immediately; MSMQ will only do so every once in a while (I seem to remember something like once every six hours, but this might have changed). Both the COM and C API allow you to tell MSMQ to TIDY its message store to get rid of already deleted messages.
As this is not available in System.Messaging, I've created the following helpers to allow you to access this additional functionality:
using System;
using System.Runtime.InteropServices;

class MessagingHelpers
{
   [DllImport("mqrt.dll", CharSet=CharSet.Unicode, ExactSpelling=true, PreserveSig=false)]
   static extern void MQMgmtAction(string machineName, string objectName, string action);

   [DllImport("mqrt.dll", CharSet=CharSet.Unicode, ExactSpelling=true, PreserveSig=false)]
   static extern void MQOpenQueue(String formatName, MQAccess access,
      MQShareMode sharemode, ref IntPtr queueHandle);

   [DllImport("mqrt.dll", ExactSpelling=true, PreserveSig=false)]
   static extern void MQCloseQueue(IntPtr queueHandle);

   [DllImport("mqrt.dll", ExactSpelling=true, PreserveSig=false)]
   static extern void MQPurgeQueue(IntPtr queueHandle);

   [Flags]
   enum MQAccess : uint
   {
      MQ_RECEIVE_ACCESS = 0x00000001,
      MQ_SEND_ACCESS = 0x00000002,
      MQ_PEEK_ACCESS = 0x00000020,
      MQ_ADMIN_ACCESS = 0x00000080
   }

   enum MQShareMode : uint
   {
      MQ_DENY_NONE = 0x00000000,
      MQ_DENY_RECEIVE_SHARE = 0x00000001
   }

   // Opens the local outgoing queue which matches the given remote queue's
   // format name and purges all messages which have not yet been delivered.
   public static void PurgeOutgoingQueueForRemoteQueue(String formatname)
   {
      IntPtr queueHandle = IntPtr.Zero;
      MQOpenQueue(formatname, MQAccess.MQ_ADMIN_ACCESS | MQAccess.MQ_RECEIVE_ACCESS,
         MQShareMode.MQ_DENY_NONE, ref queueHandle);
      MQPurgeQueue(queueHandle);
      MQCloseQueue(queueHandle);
   }

   // Tells MSMQ to tidy up its on-disk message store and remove the storage
   // files of messages which have already been deleted.
   public static void TidyLocalStorage()
   {
      MQMgmtAction(null, "MACHINE", "TIDY");
   }
}
(The secret here lies in MQ_ADMIN_ACCESS which tells MSMQ that you'd like to work with the "outgoing queue" for the specified remote queue.)
Usage sample:
class ClientApp
{
   static void Main(string[] args)
   {
      String remoteQueueFormatName = @"DIRECT=OS:testsrv2\Private$\perf_server";

      MessagingHelpers.PurgeOutgoingQueueForRemoteQueue(remoteQueueFormatName);
      MessagingHelpers.TidyLocalStorage();
   }
}
Please note that this functionality of the C API is by default only available if you are running on Windows Server 2003 or Windows XP. You can download an add-on, the "MSMQ Local Admin API", from Microsoft to get similar functionality on Windows 2000 and NT 4 if you create applications with the C API.
Update: Interestingly enough, one of my first contacts with MSMQ was about this very functionality (accessing the outbound queues' states) in NT 4.0 back in 1998. Short answer: it wasn't possible at that time. I'll add a second weblog post later today to answer my own 1998 question with .NET.
Posted at 01:47 PM | Permalink | Comments (1) | TrackBack (0)
This might come in handy at some time for people using NLB clusters in test environments.
I run a test lab in my office [1] which allows me to perform certain performance and scalability tests on applications or parts of them. I routinely reconfigure these machines, for example into Windows Network Load Balancing (NLB) clusters - or combinations of clusters - to reflect different scenarios. This time, however, I decided to repave three of these machines to get a known, identical configuration for a critical performance test.
Oddly enough, after installing fresh copies of Windows Server 2003 and mapping the servers' drives to deploy an application, I noticed some really strange behavior. Errors like "Invalid Drive Specification" and "The target account name is incorrect" routinely happened in very interesting combinations. For example, I was able to deploy my app to \\TESTSRV01\Deploy, but \\TESTSRV02\Deploy would fail in the same deployment script. A couple of seconds later, I could deploy to TESTSRV02, but the connection to TESTSRV01 would fail with the same error message. They basically worked and refused to work at random -- the only certainty was that a single one would work at any given time, but the other servers wouldn't.
MSDN and friends pointed me towards synchronization conflicts in Active Directory, but I was 100% sure that this couldn't be the cause. It did, however, lead me to the assumption that my deployment client was actually trying to connect to the servers with randomly incorrect security tokens ... or something similar. But how could this be? The machines had been installed independently ... I didn't use any shared/ghosted images, which could usually cause such things when not correctly SYSPREP'd.
At some point I decided to check whether there was some kind of conflict on the LAN. I use DHCP, so I assumed that there wouldn't be any problems. However, pinging TESTSRV1 and TESTSRV2 revealed the impossible: both had been given the same IP address. As there weren't any warnings about "IP address conflicts on your LAN", it started to dawn on me: there could be only one reason -- they must be using the same Ethernet MAC address. But then again, that's basically impossible, right? Running IPCONFIG /ALL revealed the truth: both indeed had the same MAC.
And all of a sudden, it hit me right in the face: when you add machines to an NLB cluster in which each node has two NICs (one used for node-to-node traffic and one used as the external-facing interface), all external-facing NICs will receive the same virtual IP and MAC. When I formatted the machines, I didn't remove them from the cluster beforehand (after all, I was destroying the whole cluster anyway), so they still continued to use their old virtual MACs. I had somehow mistakenly assumed that this configuration is done at runtime, but instead it seemed that the changed MACs were persisted to the NICs' flash memory. When the machines came back alive as non-clustered machines, all of them still used the same MAC and - rightly so - challenged the DHCP server, my Windows client, and a few of my beliefs.
Lesson learnt!
[1] Yes, there was a time when people wondered how one person could possibly need more than ten PCs. But I guess most of my visitors are used to it nowadays.
Posted at 06:43 PM | Permalink | Comments (1) | TrackBack (0)