Basic Manipulation Event Handling in Windows Phone 7
June 1, 2010
Roscoe, N.Y.
A video display that responds to the touch of a single finger doesn't provide much more functionality than a mouse. But a display that responds to multiple fingers is potentially much more powerful in providing intimate interaction with visual objects.
In my MSDN Magazine article "Finger Style: Exploring Multi-Touch Support in Silverlight" I showed how a Silverlight 3 program can use low-level touch input to scale and rotate elements. At the time I thought it would be nice if this facility were provided in the framework itself.
And now it is! Windows Presentation Foundation 4.0 introduces six events, all beginning with the word Manipulation, that resolve multi-touch input on a single element into graphical transforms: translation, scaling, and rotation. I'll be discussing the WPF Manipulation events in upcoming articles in MSDN Magazine.
A subset of the Manipulation events is included in the documentation of Silverlight 4, but the events are not available in Silverlight 4 applications. They are instead supported only in Silverlight for Windows Phone (hereafter known as SWP) for use in Windows Phone 7 applications.
There are enough minor differences between the WPF and SWP implementations of the Manipulation events to make it extremely awkward to discuss them together. (For example, the WPF UIElement class has an IsManipulationEnabled property that must be set to true for the element to generate Manipulation events; this property is not present in SWP. Almost always, a WPF program will want to handle the ManipulationStarting event; this event is not present in SWP. The WPF Matrix structure allows you to perform numerous powerful operations; the Silverlight Matrix structure is totally lame. And so forth.)
The two major differences between the WPF and SWP implementations of the Manipulation events are:
- The WPF Manipulation events resolve multi-touch input into translation, isotropic scaling, and rotation (optionally including one-finger rotation). SWP Manipulation events do not include rotation.
- The WPF Manipulation events optionally continue after the fingers leave the screen to provide an inertia effect. SWP does not include this feature. (Velocity information is provided, however, so it might be possible to implement inertia on one's own.)
The really really big difference is that the Manipulation events actually work in WPF. Even taking into account the inadequacy of the on-screen phone emulator to respond to touch in any useful manner, it's pretty obvious that the Manipulation events in the April Refresh of the Windows Phone 7 programming tools still need some work.
However, multi-touch is so important to Windows Phone 7 programming that it's worth the time and pain to get acquainted with these Manipulation events as early as possible. Some of the following discussion is based on my experience with the events in WPF coupled with a little (perhaps unjustified) faith that they will eventually work similarly in SWP.
The high-level interface to multi-touch in Silverlight for Windows Phone consists of three routed events defined by UIElement:
- ManipulationStarted
- ManipulationDelta
- ManipulationCompleted
As you might expect, a particular series of events begins with ManipulationStarted, followed by zero or more ManipulationDelta events, and then a single ManipulationCompleted event.
Each of these events is associated with its own event argument class (such as ManipulationStartedEventArgs). In addition, the Control class defines three protected virtual methods corresponding to these events but beginning with the word On (such as OnManipulationStarted).
The three event argument classes associated with these three events share the following properties (the first of which is defined by RoutedEventArgs):
- OriginalSource of type object, get-only
- ManipulationContainer of type UIElement, get-only
- ManipulationOrigin of type Point, get-only
- Handled of type bool, typical for routed events
It is my experience that OriginalSource and ManipulationContainer are always the same, and indicate the top-most enabled element being touched. It is also my experience that once a manipulation begins (as indicated by the ManipulationStarted event), the subsequent ManipulationDelta events and the final ManipulationCompleted event will indicate that same element, even if the fingers have drifted away from it. In other words, there is an implicit capture of touch input.
It is crucial to understand that multiple fingers on a single element constitute a single manipulation. For example, put a finger on an element. A ManipulationStarted event is fired, potentially followed by ManipulationDelta events as the finger moves. While the first finger is still on the screen, put another finger on that same element. No new ManipulationStarted event is fired! But put a finger on another element, and a new ManipulationStarted event indicates a simultaneous manipulation of that other element.
In the ManipulationStarted event, the ManipulationOrigin property indicates the location of the finger relative to the ManipulationContainer element. As the finger moves, the subsequent ManipulationDelta events indicate a ManipulationOrigin also relative to that element, even if that element is at the same time being moved (as is often the case). But if two or more fingers are on a single element, then the ManipulationOrigin property indicates something like an average position of the multiple fingers.
If the user touches one finger on one element, and another finger on another element, those constitute two independent manipulations, each indicated by a ManipulationStarted event followed by ManipulationDelta events. The ManipulationContainer property accompanying the events indicates the particular element to which the event applies.
Low-level touch input is usually accompanied by a numeric ID value that can be associated with a particular finger. The ID value allows you to use a Dictionary to store state information for that finger and to track the finger as it moves. The Manipulation events don't require this ID. Multiple fingers on a single element are always consolidated into one sequence of Manipulation events; fingers on different elements are separate sequences of Manipulation events but can be distinguished by the ManipulationContainer property.
You might be thinking "But this is no good for my particular app. I need to track individual fingers even if they're touching the same element." In that case, you'll probably want to drop down to the low-level Touch.FrameReported event, which is not currently in Silverlight for Windows Phone but which I am assured will be restored. The Manipulation events are extremely useful but they are not solutions to every problem.
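For reference, here is a sketch of that low-level technique as it works in desktop Silverlight, where the fingerPositions dictionary is my own illustrative state. After hooking the static event with Touch.FrameReported += OnTouchFrameReported, each finger can be tracked by the Id of its TouchDevice:

Dictionary<int, Point> fingerPositions = new Dictionary<int, Point>();

void OnTouchFrameReported(object sender, TouchFrameEventArgs args)
{
    // Each TouchPoint carries a TouchDevice with an Id unique to that finger
    foreach (TouchPoint point in args.GetTouchPoints(null))
    {
        int id = point.TouchDevice.Id;

        switch (point.Action)
        {
            case TouchAction.Down:
                fingerPositions.Add(id, point.Position);
                break;

            case TouchAction.Move:
                fingerPositions[id] = point.Position;
                break;

            case TouchAction.Up:
                fingerPositions.Remove(id);
                break;
        }
    }
}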
The real crux of the Manipulation events is two properties in ManipulationDeltaEventArgs:
- CumulativeManipulation of type ManipulationDelta
- DeltaManipulation of type ManipulationDelta
Yes, the ManipulationDelta event is accompanied by a ManipulationDeltaEventArgs object that has a DeltaManipulation property of type ManipulationDelta, the same name as the event. In addition, ManipulationCompletedEventArgs has a TotalManipulation property, also of type ManipulationDelta. ManipulationDelta has two properties:
- Translation of type Point
- Scale of type Point
These two properties indicate how the composite movement of one or more fingers on the element resolves into movement of the element itself and possible change in size. One finger can affect Translation; two fingers are required for Scale. The CumulativeManipulation property indicates the changes since the ManipulationStarted event. The DeltaManipulation property indicates the changes since the previous ManipulationDelta event.
The Translation factors are relative to the original position of the element at the time of the ManipulationStarted event, even if the element is being moved. The Scale property provides multiplicative scaling factors that can be applied to the element. As you know, scaling is always relative to a center. That center is provided by the ManipulationOrigin property, which is relative to the upper-left corner of the element.
As you put two fingers on an element and move them, the composite difference of the new location of the fingers from their original location is reflected in the Translation property. The distance between the two fingers relative to their original separation is reflected in the Scale property. The average location of the fingers relative to the upper-left corner of the element is provided by ManipulationOrigin. In theory, this provides enough information to set a RenderTransform that translates and scales the element.
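The geometry itself is simple to state: scaling a point p about a center c by factors s yields c + s * (p - c). Here it is as a little helper method, entirely my own and purely for illustration:

// Scales point p about center c by factors s: p' = c + s * (p - c)
Point ScaleAbout(Point p, Point c, Point s)
{
    return new Point(c.X + s.X * (p.X - c.X),
                     c.Y + s.Y * (p.Y - c.Y));
}

Folding that arithmetic into an actual RenderTransform is the tricky part, as I'll discuss toward the end of this entry.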
I can't show you how scaling works. Although I have two multi-touch displays (one on my desk and another on a laptop), and on both I run the April Refresh of the Windows Phone 7 tools and emulator under Windows 7, I have not been able to persuade a Silverlight app running on the phone emulator to recognize more than one finger at a time.
Moreover, the Scale property always reports values of (0, 0). The default values for no scaling should be (1, 1).
The other big issue I encountered was the interaction between the Translation property and phone orientation. If the phone emulator is rotated on its side into landscape mode, the ManipulationOrigin property correctly indicates a coordinate relative to the new upper-left corner of the display, but the Translation property has values relative to the old upper-left corner of the portrait orientation! I'm sure this is wrong, but I've provided a fix so at least the programs here don't behave in a silly manner.
The source code for this blog entry is in a solution called BasicManipulation, which has two projects. The projects look the same: You can move red, green, and blue rectangles around the phone emulator display with your finger (on a multi-touch display running under Windows 7) or the mouse. The only difference is how I handle the events.
In the ElementHandledDemo project, three Rectangle objects are defined in MainPage.xaml:
<Rectangle Fill="Red"
           Width="200"
           Height="200"
           HorizontalAlignment="Left"
           VerticalAlignment="Top"
           ManipulationStarted="OnRectangleManipulationStarted"
           ManipulationDelta="OnRectangleManipulationDelta">
    <Rectangle.RenderTransform>
        <TranslateTransform />
    </Rectangle.RenderTransform>
</Rectangle>

<Rectangle Fill="Lime"
           Width="200"
           Height="200"
           HorizontalAlignment="Left"
           VerticalAlignment="Top"
           ManipulationStarted="OnRectangleManipulationStarted"
           ManipulationDelta="OnRectangleManipulationDelta">
    <Rectangle.RenderTransform>
        <TranslateTransform X="100" Y="100" />
    </Rectangle.RenderTransform>
</Rectangle>

<Rectangle Fill="Blue"
           Width="200"
           Height="200"
           HorizontalAlignment="Left"
           VerticalAlignment="Top"
           ManipulationStarted="OnRectangleManipulationStarted"
           ManipulationDelta="OnRectangleManipulationDelta">
    <Rectangle.RenderTransform>
        <TranslateTransform X="200" Y="200" />
    </Rectangle.RenderTransform>
</Rectangle>
Notice that the ManipulationStarted and ManipulationDelta events for each of the three Rectangle elements are assigned handlers, and that each of the elements also has its RenderTransform property set to a TranslateTransform object, which in two cases is initialized to provide a visual offset of the element. Changing the X and Y values of this TranslateTransform is certainly the easiest way for a Silverlight program to handle the Translation component of the ManipulationDelta event, but of course it's not adequate for scaling. (More on this issue at the end.)
In the MainPage.xaml.cs code-behind file, the ManipulationStarted handler simply brings the touched element to the foreground by resetting all the Canvas.ZIndex attached properties:
void OnRectangleManipulationStarted(object sender,
                                    ManipulationStartedEventArgs args)
{
    // Bring touched element to top
    Panel pnl = (args.ManipulationContainer as FrameworkElement).Parent as Panel;

    for (int i = 0; i < pnl.Children.Count; i++)
    {
        UIElement child = pnl.Children[i];
        Canvas.SetZIndex(child,
            child == args.ManipulationContainer ? pnl.Children.Count : i);
    }
    args.Handled = true;
}
The real action occurs in the ManipulationDelta handler. Translation should be handled simply by increasing the X and Y properties of the TranslateTransform by the X and Y properties of the Translation property of DeltaManipulation. However, in the current release, the points need to be effectively rotated by 90° or 180° depending on the orientation:
void OnRectangleManipulationDelta(object sender,
                                  ManipulationDeltaEventArgs args)
{
    // Set Transform of touched element
    TranslateTransform xform =
        args.ManipulationContainer.RenderTransform as TranslateTransform;
    Point translation = args.DeltaManipulation.Translation;

    switch (this.Orientation)
    {
        case PageOrientation.PortraitUp:
            xform.X += translation.X;
            xform.Y += translation.Y;
            break;

        case PageOrientation.PortraitDown:
            xform.X -= translation.X;
            xform.Y -= translation.Y;
            break;

        case PageOrientation.LandscapeLeft:
            xform.X += translation.Y;
            xform.Y -= translation.X;
            break;

        case PageOrientation.LandscapeRight:
            xform.X -= translation.Y;
            xform.Y += translation.X;
            break;
    }
    args.Handled = true;
}
The Manipulation events are routed events, which means they travel up the visual tree until some element sets the Handled property of the event arguments to true. This means that handlers don't have to be set on the Rectangle events. Instead, the page itself can handle the events by overriding its OnManipulationStarted and OnManipulationDelta methods. That's the approach taken by the PageHandledDemo project. The XAML file is simpler because no events need be defined, but the code is just a little more complex because the two methods need to check if ManipulationContainer is really a Rectangle.
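The essence of the PageHandledDemo version looks something like this sketch, with the orientation workaround shown above omitted for brevity:

protected override void OnManipulationDelta(ManipulationDeltaEventArgs args)
{
    // The events bubble up the tree, so verify that the element
    // being manipulated is actually one of the rectangles
    if (args.ManipulationContainer is Rectangle)
    {
        TranslateTransform xform =
            args.ManipulationContainer.RenderTransform as TranslateTransform;

        xform.X += args.DeltaManipulation.Translation.X;
        xform.Y += args.DeltaManipulation.Translation.Y;
        args.Handled = true;
    }
    base.OnManipulationDelta(args);
}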
Another approach would be to create a class derived from UserControl called ManipulableElement (for example) and have that class override its own OnManipulationStarted and OnManipulationDelta methods to perform its own manipulation.
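A minimal sketch of that idea, with the class name and all details my own invention:

public class ManipulableElement : UserControl
{
    TranslateTransform xform = new TranslateTransform();

    public ManipulableElement()
    {
        RenderTransform = xform;
    }

    protected override void OnManipulationDelta(ManipulationDeltaEventArgs args)
    {
        // The element simply moves itself; no external handler required
        xform.X += args.DeltaManipulation.Translation.X;
        xform.Y += args.DeltaManipulation.Translation.Y;
        args.Handled = true;

        base.OnManipulationDelta(args);
    }
}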
If scaling were working, how would I deal with it? I don't know. In WPF, it's convenient to set the RenderTransform property of the manipulated elements to a MatrixTransform. In the ManipulationDelta handler, the Matrix property of the MatrixTransform is accessed and subjected to method calls, including ScaleAt to apply a new scaling factor to the matrix based on the ManipulationDelta.Scale factors and the ManipulationOrigin center.
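In WPF (and only in WPF) that handler can look something like this sketch, along the lines of the MSDN samples:

void OnManipulationDelta(object sender, ManipulationDeltaEventArgs args)
{
    UIElement element = args.OriginalSource as UIElement;
    MatrixTransform xform = element.RenderTransform as MatrixTransform;
    Matrix matrix = xform.Matrix;       // Matrix is a struct, so this is a copy

    ManipulationDelta delta = args.DeltaManipulation;
    Point center = args.ManipulationOrigin;

    matrix.ScaleAt(delta.Scale.X, delta.Scale.Y, center.X, center.Y);
    matrix.Translate(delta.Translation.X, delta.Translation.Y);

    xform.Matrix = matrix;              // and must be assigned back
    args.Handled = true;
}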
Of course, the Silverlight version of Matrix doesn't have those methods. (There are times I think Silverlight should have been called "WPF for Babies.") The Silverlight Matrix structure doesn't even support a multiplication operator. In order to make the math work right, I think we'll have to implement our own matrix multiplication logic. It certainly won't be the first time I've had to do this in Silverlight.
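Here is a sketch of what that do-it-yourself logic might look like: matrix multiplication using the same row-vector convention as WPF, plus a ScaleAt substitute built on top of it:

// Multiplies two Silverlight Matrix values (row-vector convention)
static Matrix Multiply(Matrix a, Matrix b)
{
    return new Matrix(a.M11 * b.M11 + a.M12 * b.M21,
                      a.M11 * b.M12 + a.M12 * b.M22,
                      a.M21 * b.M11 + a.M22 * b.M21,
                      a.M21 * b.M12 + a.M22 * b.M22,
                      a.OffsetX * b.M11 + a.OffsetY * b.M21 + b.OffsetX,
                      a.OffsetX * b.M12 + a.OffsetY * b.M22 + b.OffsetY);
}

// Equivalent of the WPF ScaleAt method: scale about the point (cx, cy)
static Matrix ScaleAt(Matrix m, double sx, double sy, double cx, double cy)
{
    Matrix scale = new Matrix(sx, 0, 0, sy, cx - sx * cx, cy - sy * cy);
    return Multiply(m, scale);
}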
I'm tempted to try to add some inertia, but the latency of touch input is so bad on the phone emulator that I'm not sure I'd be able to tell if it's working correctly. (Whatever "correctly" means under these conditions.)
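For anyone who wants to experiment anyway, here's a rough sketch. It assumes that the FinalVelocities property of ManipulationCompletedEventArgs reports LinearVelocity in pixels per second (as documented), and that xform is the TranslateTransform from the handlers above:

Point velocity;
DateTime lastTime;

void OnRectangleManipulationCompleted(object sender,
                                      ManipulationCompletedEventArgs args)
{
    velocity = args.FinalVelocities.LinearVelocity;   // pixels per second
    lastTime = DateTime.Now;
    CompositionTarget.Rendering += OnRendering;
}

void OnRendering(object sender, EventArgs args)
{
    DateTime now = DateTime.Now;
    double seconds = (now - lastTime).TotalSeconds;
    lastTime = now;

    xform.X += velocity.X * seconds;
    xform.Y += velocity.Y * seconds;

    // Friction: decay the velocity and stop when it becomes negligible
    velocity.X *= 0.95;
    velocity.Y *= 0.95;

    if (Math.Abs(velocity.X) < 1 && Math.Abs(velocity.Y) < 1)
        CompositionTarget.Rendering -= OnRendering;
}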