
Original: real-time frame grabbing with UIImagePickerController

Attachment: /Files/walkklookk/AugmentedRealitySample.zip
Created: 2010-04-07

[My note: there are two approaches:]
Reference: http://stackoverflow.com/questions/1317978/iphone-get-camera-preview

This one also works quite well. Use it while the camera preview is open:

// note: PLCameraController is a private class; objc_getClass needs <objc/runtime.h>
UIImage *viewImage = [[(id)objc_getClass("PLCameraController")
                      performSelector:@selector(sharedInstance)]
                      performSelector:@selector(_createPreviewImage)];

But as far as I can tell it produces the same results as the following solution, which takes a 'screenshot' of the current screen:

extern CGImageRef UIGetScreenImage();

// rect is the region of interest in screen coordinates
CGImageRef cgoriginal = UIGetScreenImage();
CGImageRef cgimg = CGImageCreateWithImageInRect(cgoriginal, rect);
UIImage *viewImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgoriginal);
CGImageRelease(cgimg);

A problem I still haven't found a fix for is how to get the camera image very quickly, without any overlays.


[The second approach is the safer of the two, because Apple now explicitly allows the use of the UIGetScreenImage function.]
Reference: http://www.tuaw.com/2009/12/15/apple-relents-and-is-now-allowing-uigetscreenimage-for-app-st/

Developers can now use a private API for screen capture on the iPhone, says Apple

As Apple seems to be lightening up and accepting more applications using private APIs (including Ustream and others that stream video from the iPhone 3G), word comes that the review team is now officially allowing the UIGetScreenImage() function to be used in applications distributed in the App Store.

An Apple forum moderator stated in the developer forums:
"After carefully considering the issue, Apple is now allowing applications to use the function UIGetScreenImage() to programmatically capture the current screen contents." The function prototype is as follows:

CGImageRef UIGetScreenImage();
Apple also states "that a future release of iPhone OS may provide a public API equivalent of this functionality." It's also noted that "At such time, all applications using UIGetScreenImage() will be required to adopt the public API."

This function, which is a part of the Core Graphics framework, allows an application access to what's currently being displayed on the screen. It's useful for things like capturing a screen shot, as our own Erica Sadun's BETAkit does to allow developers to send screen shots to a developer. It also allows streaming video from the iPhone camera, as an application like this captures what's being displayed on the screen from the camera, and records it or sends it somewhere.

What other features are devs hoping to see opened up? There are things like general calendar access, Core Surface, XMPP, and app-settable timers that developers would like to take advantage of in their SDK apps.

I hope this is a sign of what's to come for the iPhone SDK, and that we'll see more things like this opened up soon for App Store distribution.

[via the Apple Developer Forums, dev membership required]
+++++
+++++

[My note: however, using UIGetScreenImage on its own seems to be problematic; the approach below solves this. (See the attachment for the related source code.) (PS: this page needs a proxy to be reachable.)]
See: http://cmgresearch.blogspot.com/2010/01/augmented-reality-on-iphone-how-to_01.html

Augmented Reality on the iPhone - how to

When Apple released the 3.1 update to the iPhone operating system they added some extra properties to the UIImagePickerController allowing you to add your own camera overlay and hide the camera controls. Before this developers had to dive into the UIView hierarchy and hack things around as detailed here. Fortunately the new API is a lot simpler and quite a few applications have been released that take advantage of this to do some Augmented Reality. 

Unfortunately one thing that is still lacking is access to the real time video feed from the camera. This limits what you can currently do with the iPhone in terms of real time video processing. 

There have been various attempts to access the camera, unfortunately they all seem to fall outside of the public SDK so using them in an app destined for the app store is not possible. 

However, there was a recent announcement from Apple that they would be allowing applications to use the UIGetScreenImage API call, which was previously a private function. This opens up some possibilities for accessing the real time video feed. Unfortunately the function is an all or nothing screen grab - which makes it a bit difficult to draw data on top of the camera view and access the real time feed.

Fortunately, as you can see from the screenshot and the video, you can still do some cool stuff. I've used it in my app Sudoku Grab to great effect.

If you look carefully at the screenshot you'll see that I'm actually drawing to the screen using a checkerboard pattern - this gives a good enough image for the user to see, but still allows enough of the camera preview image to show through to be usable. Hopefully this blog post will provide enough information to get you started on implementing your own Augmented Reality app. I've attached a link to a sample application at the end of the post.

So, how does it all work? 

The first thing we need for any useful image processing is a way to get at the pixels of an image. These two utility functions give you that:
Image *fromCGImage(CGImageRef srcImage, CGRect srcRect) {
    Image *result=createImage(srcRect.size.width, srcRect.size.height);
    // get hold of the image bytes
    CGColorSpaceRef colorSpace=CGColorSpaceCreateDeviceGray();
    CGContextRef context=CGBitmapContextCreate(result->rawImage,
                                               result->width,
                                               result->height,
                                               8,
                                               result->width,
                                               colorSpace,
                                               kCGImageAlphaNone);
    // lowest possible quality for speed
    CGContextSetInterpolationQuality(context, kCGInterpolationNone);
    CGContextSetShouldAntialias(context, NO);
    // get the rectangle of interest from the image
    CGImageRef subImage=CGImageCreateWithImageInRect(srcImage, srcRect);
    // draw it into our bitmap context
    CGContextDrawImage(context, CGRectMake(0,0, result->width, result->height), subImage);
    // cleanup
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(subImage);
    return result;
}

CGImageRef toCGImage(Image *srcImage) {
    // generate space for the result
    uint8_t *rgbData=(uint8_t *) calloc(srcImage->width*srcImage->height*sizeof(uint32_t),1);
    // process the greyscale image back to rgb
    for(int i=0; i<srcImage->height*srcImage->width; i++) {
        // no alpha
        rgbData[i*4]=0;
        int val=srcImage->rawImage[i];
        // rgb values
        rgbData[i*4+1]=val;
        rgbData[i*4+2]=val;
        rgbData[i*4+3]=val;
    }
    // create the CGImage from this data
    CGColorSpaceRef colorSpace=CGColorSpaceCreateDeviceRGB();
    CGContextRef context=CGBitmapContextCreate(rgbData,
                                               srcImage->width,
                                               srcImage->height,
                                               8,
                                               srcImage->width*sizeof(uint32_t),
                                               colorSpace,
                                               kCGBitmapByteOrder32Little|kCGImageAlphaNoneSkipLast);
    // cleanup
    CGImageRef image=CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(rgbData);
    return image;
}
The first function takes a CGImage and a region of interest and turns it into bytes that represent the pixels of the image. The second function reverses the process and will give you a CGImage from raw bytes. To make life a bit simpler I'm using a structure that packages up information about the raw image data:
typedef struct {
    uint8_t *rawImage; // the raw pixel data
    uint8_t **pixels;  // 2D array of pixels e.g. use pixels[y][x]
    int width;
    int height;
} Image;

Image *createImage(int width, int height) {
    Image *result=(Image *) malloc(sizeof(Image));
    result->width=width;
    result->height=height;
    result->rawImage=(uint8_t *) calloc(result->width*result->height, 1);
    // create a 2D array - this makes using the data a lot easier
    result->pixels=(uint8_t **) malloc(sizeof(uint8_t *)*result->height);
    for(int y=0; y<result->height; y++) {
        result->pixels[y]=result->rawImage+y*result->width;
    }
    return result;
}

void destroyImage(Image *image) {
    free(image->rawImage);
    free(image->pixels);
    free(image);
}
Normally I would write something like this in C++ - but to keep things simple and allow the use of standard Objective-C I've stuck to straight C for this demo. 
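[Note: a quick way to sanity-check these helpers - not part of the original post - is to round-trip a UIImage through the Image structure. The sketch below assumes nothing beyond the functions above; it simply inverts the greyscale pixels so you can see that the conversion really happened:

// Sketch only: UIImage -> Image -> (invert pixels) -> CGImage -> UIImage
UIImage *invertedCopyOfImage(UIImage *source) {
    CGRect region=CGRectMake(0, 0, CGImageGetWidth(source.CGImage), CGImageGetHeight(source.CGImage));
    Image *grey=fromCGImage(source.CGImage, region);
    for(int y=0; y<grey->height; y++) {
        for(int x=0; x<grey->width; x++) {
            grey->pixels[y][x]=255-grey->pixels[y][x];
        }
    }
    CGImageRef processedCG=toCGImage(grey);
    UIImage *processed=[UIImage imageWithCGImage:processedCG]; // autoreleased
    CGImageRelease(processedCG);
    destroyImage(grey);
    return processed;
}

The returned image is autoreleased, matching the manual-reference-counting style of the rest of the sample.]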

We can now use these classes to start doing something useful. The first thing we are going to need is a view that can draw using the checkerboard mask.
- (id)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
        // create the mask image
        Image *checkerBoardImage=createImage(self.bounds.size.width, self.bounds.size.height);
        for(int y=0; y<checkerBoardImage->height; y+=2) {
            for(int x=0; x<checkerBoardImage->width; x+=2) {
                checkerBoardImage->pixels[y][x]=255;
            }
        }
        for(int y=1; y<checkerBoardImage->height; y+=2) {
            for(int x=1; x<checkerBoardImage->width; x+=2) {
                checkerBoardImage->pixels[y][x]=255;
            }
        }
        // convert to a CGImage
        maskImage=toCGImage(checkerBoardImage);
        // cleanup
        destroyImage(checkerBoardImage);
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    // we're going to draw into an image using our checkerboard mask
    UIGraphicsBeginImageContext(self.bounds.size);
    CGContextRef context=UIGraphicsGetCurrentContext();
    CGContextClipToMask(context, self.bounds, maskImage);
    // do your drawing here

    ////////
    UIImage *imageToDraw=UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // now do the actual drawing of the image
    CGContextRef drawContext=UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(drawContext, 0.0, self.bounds.size.height);
    CGContextScaleCTM(drawContext, 1.0, -1.0);
    // very important to switch these off - we don't want our grid pattern to be disturbed in any way
    CGContextSetInterpolationQuality(drawContext, kCGInterpolationNone);
    CGContextSetShouldAntialias(drawContext, NO);
    CGContextDrawImage(drawContext, self.bounds, [imageToDraw CGImage]);

    // stash the results of our drawing so we can remove them later
    if(drawnImage) destroyImage(drawnImage);
    drawnImage=fromCGImage([imageToDraw CGImage], self.bounds);
}
The line of code that does the clever stuff is here:
 CGContextClipToMask(context, self.bounds, maskImage);
That tells core graphics to use our checkerboard image as a clipping mask. As we've only set alternate pixels in the mask this will have the effect of filtering our drawing commands so they only show up on alternate pixels. You might be wondering why I'm drawing to an image and then drawing that to the screen - we'll be making use of the image in a bit. 
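[Note: a minimal example of what could go in the "do your drawing here" section - not from the original post - assuming the overlay view keeps the CGPathRef it is given in a path instance variable (see the setPath: sketch further down):

// Sketch only: stroke the stored edge path inside the masked context
if(path) {
    CGContextSetStrokeColorWithColor(context, [UIColor greenColor].CGColor);
    CGContextSetLineWidth(context, 2.0f);
    CGContextAddPath(context, path);
    CGContextStrokePath(context);
}

Because the clipping mask is already in place, only alternate pixels of the stroked path actually end up on screen.]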

Now in our view controller where we launch the image picker we can use this view as the camera overlay:
-(IBAction) runAugmentedReality {
    // set up our camera overlay view

    // tool bar - handy if you want to be able to exit from the image picker...
    UIToolbar *toolBar=[[[UIToolbar alloc] initWithFrame:CGRectMake(0, 480-44, 320, 44)] autorelease];
    NSArray *items=[NSArray arrayWithObjects:
                    [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemFlexibleSpace target:nil action:nil] autorelease],
                    [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone target:self action:@selector(finishedAugmentedReality)] autorelease],
                    nil];
    [toolBar setItems:items];
    // create the overlay view
    overlayView=[[[OverlayView alloc] initWithFrame:CGRectMake(0, 0, 320, 480-44)] autorelease];
    // important - it needs to be transparent so the camera preview shows through!
    overlayView.opaque=NO;
    overlayView.backgroundColor=[UIColor clearColor];
    // parent view for our overlay
    UIView *parentView=[[[UIView alloc] initWithFrame:CGRectMake(0,0,320, 480)] autorelease];
    [parentView addSubview:overlayView];
    [parentView addSubview:toolBar];

    // configure the image picker with our overlay view
    UIImagePickerController *picker=[[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    // hide the camera controls
    picker.showsCameraControls=NO;
    picker.delegate = nil;
    picker.allowsImageEditing = NO;
    // and put our overlay view in
    picker.cameraOverlayView=parentView;
    [self presentModalViewController:picker animated:YES];
    [picker release];
    // start our processing timer
    processingTimer=[NSTimer scheduledTimerWithTimeInterval:1/5.0f target:self selector:@selector(processImage) userInfo:nil repeats:YES];
}
The important line of code here is:
 // and put our overlay view in
picker.cameraOverlayView=parentView;
This puts our view on top of the camera's screen. We can now start grabbing images from the screen using UIGetScreenImage:
// this is where it all happens
CGImageRef UIGetScreenImage();

-(void) processImage {
    // grab the screen
    CGImageRef screenCGImage=UIGetScreenImage();
    // turn it into something we can use
    Image *screenImage=fromCGImage(screenCGImage, overlayView.frame);
    CGImageRelease(screenCGImage);
    // process the image to remove our drawing - WARNING the edge pixels of the image are not processed
    Image *drawnImage=overlayView.drawnImage;
    for(int y=1; y<screenImage->height-1; y++) {
        for(int x=1; x<screenImage->width-1; x++) {
            // if we drew to this pixel replace it with the average of the surrounding pixels
            if(drawnImage->pixels[y][x]!=0) {
                screenImage->pixels[y][x]=(screenImage->pixels[y-1][x]+screenImage->pixels[y+1][x]+
                                           screenImage->pixels[y][x-1]+screenImage->pixels[y][x+1])/4;
            }
        }
    }
    // do something clever with the image here and tell the overlay view to draw stuff
    // simple edge detection and following:
    CGMutablePathRef pathRef=CGPathCreateMutable();
    int lastX=-1000, lastY=-1000;
    for(int y=0; y<screenImage->height-1; y++) {
        for(int x=0; x<screenImage->width-1; x++) {
            int edge=(abs(screenImage->pixels[y][x]-screenImage->pixels[y][x+1])+
                      abs(screenImage->pixels[y][x]-screenImage->pixels[y+1][x]))/2;
            if(edge>10) {
                int dist=(x-lastX)*(x-lastX)+(y-lastY)*(y-lastY);
                if(dist>50) {
                    CGPathMoveToPoint(pathRef, NULL, x, y);
                    lastX=x;
                    lastY=y;
                } else if(dist>10) {
                    CGPathAddLineToPoint(pathRef, NULL, x, y);
                    lastX=x;
                    lastY=y;
                }
            }
        }
    }
    // update the overlay view
    [overlayView setPath:pathRef];
    //////////////

    // finished with the screen image
    destroyImage(screenImage);
}
For this example I'm doing some pretty basic edge detection and putting the edges into a CGPath. This is then drawn by the overlay view. 
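[Note: the setPath: method called above is not shown in the post. A plausible implementation - a sketch only, assuming the overlay keeps the path in a CGMutablePathRef instance variable named path and takes ownership of it - is simply:

// Sketch only: take ownership of the new path and trigger a redraw
- (void) setPath:(CGMutablePathRef) newPath {
    if(path) CGPathRelease(path);
    path=newPath;
    [self setNeedsDisplay];
}

Because the view takes ownership, processImage does not need to release the path it creates.]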

The important lines of code are here. We ask the overlay view for the image it drew to the screen, then we go through each pixel to see if we drew to it, and if we did we replace it with the average of the surrounding pixels. This way we remove any artefacts from our drawing at the cost of some screen resolution:
// process the image to remove our drawing - WARNING the edge pixels of the image are not processed
Image *drawnImage=overlayView.drawnImage;
for(int y=1; y<screenImage->height-1; y++) {
    for(int x=1; x<screenImage->width-1; x++) {
        // if we drew to this pixel replace it with the average of the surrounding pixels
        if(drawnImage->pixels[y][x]!=0) {
            screenImage->pixels[y][x]=(screenImage->pixels[y-1][x]+screenImage->pixels[y+1][x]+
                                       screenImage->pixels[y][x-1]+screenImage->pixels[y][x+1])/4;
        }
    }
}
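[Note: the Done button's finishedAugmentedReality action referenced in runAugmentedReality is not shown either. A teardown along these lines - a sketch, not taken from the sample - stops the processing timer and dismisses the picker:

// Sketch only: stop grabbing frames and dismiss the camera picker
-(void) finishedAugmentedReality {
    [processingTimer invalidate];
    processingTimer=nil;
    [self dismissModalViewControllerAnimated:YES];
}

Without invalidating the timer, processImage would keep calling UIGetScreenImage after the camera view has gone.]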
That's it! The full sample app is available here: http://dl.dropbox.com/u/508075/augmented_reality/AugmentedRealitySample.zip

+++++
+++++

[My note: PS: using [view.layer renderInContext:] definitely fails here - all you get is a black screen.]
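[Note: for reference, this is the kind of snapshot code the remark above refers to - a sketch only; it needs QuartzCore, and for views containing the live camera preview it indeed comes back black:

#import <QuartzCore/QuartzCore.h>

// Sketch only: render a view's layer tree into a UIImage
// (works for ordinary views, but NOT for the live camera preview)
UIImage *snapshotOfView(UIView *view) {
    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot=UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}

That is why the UIGetScreenImage approach above is needed in the first place.]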
+++++
+++++
[My note: to get at the pixel data behind a UIImage you can use:
CFDataRef CreateDataFromImage(UIImage *image)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
}
However, I have not tested this method myself.]
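[Note: if that works, reading a pixel out of the returned buffer might look like the sketch below (also untested; the meaning of the individual components depends on the image's pixel format, and the bytes-per-row must be queried rather than assumed):

// Sketch only, untested: log the first component of one pixel of a UIImage
void logPixel(UIImage *image, size_t x, size_t y) {
    CFDataRef data=CreateDataFromImage(image);
    const UInt8 *bytes=CFDataGetBytePtr(data);
    size_t bytesPerRow=CGImageGetBytesPerRow(image.CGImage);
    size_t bytesPerPixel=CGImageGetBitsPerPixel(image.CGImage)/8;
    const UInt8 *pixel=bytes+y*bytesPerRow+x*bytesPerPixel;
    NSLog(@"first component of pixel (%lu,%lu) = %d", (unsigned long)x, (unsigned long)y, pixel[0]);
    CFRelease(data);
}

Remember that CGDataProviderCopyData follows the create rule, so the returned CFDataRef must be released.]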
+++++

Posted on 2012-12-26 10:17 by 佳为好友, filed under: UI

