2010-11-30

AVCaptureSession returns a blank image on iPhone 4

I'm trying to run the "Putting It All Together" media-capture example from the AVFoundation Programming Guide, but I keep getting blank (black) images. Is there something I have to call first to get access to the camera before running this code?

- (void)setupCapture {
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetLow;

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"no input");
        return;
    }
    [session addInput:input];

    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    [session addOutput:output];
    output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    output.minFrameDuration = CMTimeMake(1, 15);

    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    [session startRunning];
}

UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height.
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space.
    static CGColorSpaceRef colorSpace = NULL;
    if (colorSpace == NULL) {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        if (colorSpace == NULL) {
            // Handle the error appropriately; unlock the buffer before bailing out.
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            return nil;
        }
    }

    // Get the base address of the pixel buffer.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply.
    CGDataProviderRef dataProvider =
        CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
    // Create a bitmap image from data supplied by the data provider.
    CGImageRef cgImage =
        CGImageCreate(width, height, 8, 32, bytesPerRow,
                      colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                      dataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);

    // Create and return an image object to represent the Quartz image.
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}

And here is my callback method. (I don't start the capture session in the main viewDidLoad method, but a bit later:)

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    NSLog(@"am I nil?: %@", image);
    self.imageV.image = image;
    [self.view setNeedsDisplay];
}

Answer


Tip 1: Is this the code unchanged from the example? Tip 2: Don't update your UI from the capture session's callback method. Do it on the main thread.
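A minimal sketch of tip 2, reusing the question's imageFromSampleBuffer helper and imageV outlet: do the conversion work on the capture queue, then hand only the UIKit update to the main queue.

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // This delegate runs on the queue passed to
    // -setSampleBufferDelegate:queue:, so the image conversion
    // can stay here...
    UIImage *image = imageFromSampleBuffer(sampleBuffer);

    // ...but UIKit must only be touched from the main thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageV.image = image;
    });
}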

You can accept your own answer.
