How to scale/resize CVPixelBufferRef in Objective-C, iOS


I am trying to resize an image from a CVPixelBufferRef to 299x299.
Ideally it would also crop the image. The original pixel buffer is 640x320; the goal is to scale/crop to 299x299 without losing the aspect ratio (crop to center).
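For example, with a 640x320 source and a square 299x299 target, the centered crop is the middle 320x320 region (x offset (640 - 320) / 2 = 160), which is then scaled down to 299x299.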



I found code to resize a UIImage in Objective-C, but none to resize a CVPixelBufferRef. I have found various very complicated Objective-C examples covering many different image types, but none specifically for resizing a CVPixelBufferRef.



What is the easiest/best way to do this? Please include the exact code.



I tried the answer from selton, but it did not work: the pixel format of the resulting scaled buffer is not correct, so it falls into the assert below.


OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
int doReverseChannels;
if (kCVPixelFormatType_32ARGB == sourcePixelFormat) {
    doReverseChannels = 1;
} else if (kCVPixelFormatType_32BGRA == sourcePixelFormat) {
    doReverseChannels = 0;
} else {
    assert(false); // Unknown source format
}








It's unclear why you posted the if-else statements. If you included more about how you tried selton's answer, perhaps he could help you more. What is the subsequent code that has the pixel format requirements?
– Allen Humphreys
2 days ago







The original format of the pixel buffer was kCVPixelFormatType_32ARGB, and the new scaled buffer must also be kCVPixelFormatType_32ARGB. selton's code changes the format, which triggers the assert(false). How can I keep the format the same, or force it to kCVPixelFormatType_32ARGB?
– James
yesterday





Thanks @James, I posted an answer. Hope it helps.
– Allen Humphreys
yesterday




3 Answers



You can consider using CIImage:




CIImage *image = [CIImage imageWithCVPixelBuffer:pxbuffer];
CIImage *scaledImage = [image imageByApplyingTransform:CGAffineTransformMakeScale(0.1, 0.1)];
CVPixelBufferRef scaledBuf = [scaledImage pixelBuffer];



You should change the scale factors to fit your destination size.
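
A note from the comments below: [scaledImage pixelBuffer] returns nil here, because that property is only populated when the CIImage was created directly from a pixel buffer. A minimal sketch of a workaround, assuming you instead render the scaled image through a CIContext into a buffer you create yourself with the format you need (this mirrors the Core Image approach in the answer below):

// Sketch: render the scaled CIImage into a pixel buffer we create
// ourselves, so the output format is under our control.
CIImage *image = [CIImage imageWithCVPixelBuffer:pxbuffer];
CIImage *scaledImage = [image imageByApplyingTransform:CGAffineTransformMakeScale(0.1, 0.1)];

CVPixelBufferRef scaledBuf = NULL;
CVPixelBufferCreate(kCFAllocatorDefault,
                    (size_t)CGRectGetWidth(scaledImage.extent),
                    (size_t)CGRectGetHeight(scaledImage.extent),
                    kCVPixelFormatType_32ARGB, // match the source format from the question
                    NULL,
                    &scaledBuf);

if (scaledBuf != NULL) {
    CIContext *ciContext = [CIContext contextWithOptions:nil]; // create once and reuse in real code
    [ciContext render:scaledImage toCVPixelBuffer:scaledBuf];
}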





This did not work. The pixel format seems to be wrong in the scaled buffer; it fails the same kCVPixelFormatType_32ARGB / kCVPixelFormatType_32BGRA check shown in the question.
– James
Jul 24 at 18:08







[scaledImage pixelBuffer] returns nil, and CVPixelBufferGetPixelFormatType(nil) returns 0.
– Allen Humphreys
4 hours ago





Using CoreMLHelpers as inspiration, we can create a C function that does what you need. Based on your pixel format requirements, I think this solution will be the most efficient option. I used an AVCaptureVideoDataOutput for testing.





I hope this helps!



The AVCaptureVideoDataOutputSampleBufferDelegate implementation. The majority of the work here is creating the centered cropping rectangle; AVMakeRectWithAspectRatioInsideRect is key (it does exactly what you want).




- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) { return; }

    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);

    CGRect videoRect = CGRectMake(0, 0, width, height);
    CGSize scaledSize = CGSizeMake(299, 299);

    // Create a rectangle that meets the output size's aspect ratio, centered in the original video frame
    CGRect centerCroppingRect = AVMakeRectWithAspectRatioInsideRect(scaledSize, videoRect);

    CVPixelBufferRef croppedAndScaled = createCroppedPixelBuffer(pixelBuffer, centerCroppingRect, scaledSize);

    // Do other things here
    // For example
    CIImage *image = [CIImage imageWithCVImageBuffer:croppedAndScaled];
    // End example

    CVPixelBufferRelease(croppedAndScaled);
}



The basic premise of this function is that it first crops to the specified rectangle, then scales to the final desired size. The cropping is achieved by simply ignoring the data outside the rectangle. Scaling is achieved through Accelerate's vImageScale_ARGB8888 function. Again, thanks to CoreMLHelpers for the insight.




void assertCropAndScaleValid(CVPixelBufferRef pixelBuffer, CGRect cropRect, CGSize scaleSize) {
    CGFloat originalWidth = (CGFloat)CVPixelBufferGetWidth(pixelBuffer);
    CGFloat originalHeight = (CGFloat)CVPixelBufferGetHeight(pixelBuffer);

    assert(CGRectContainsRect(CGRectMake(0, 0, originalWidth, originalHeight), cropRect));
    assert(scaleSize.width > 0 && scaleSize.height > 0);
}

void pixelBufferReleaseCallBack(void *releaseRefCon, const void *baseAddress) {
    if (baseAddress != NULL) {
        free((void *)baseAddress);
    }
}

// Returns a CVPixelBufferRef with +1 retain count
CVPixelBufferRef createCroppedPixelBuffer(CVPixelBufferRef sourcePixelBuffer, CGRect croppingRect, CGSize scaledSize) {

    OSType inputPixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer);
    assert(inputPixelFormat == kCVPixelFormatType_32BGRA
           || inputPixelFormat == kCVPixelFormatType_32ABGR
           || inputPixelFormat == kCVPixelFormatType_32ARGB
           || inputPixelFormat == kCVPixelFormatType_32RGBA);

    assertCropAndScaleValid(sourcePixelBuffer, croppingRect, scaledSize);

    if (CVPixelBufferLockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly) != kCVReturnSuccess) {
        NSLog(@"Could not lock base address");
        return nil;
    }

    void *sourceData = CVPixelBufferGetBaseAddress(sourcePixelBuffer);
    if (sourceData == NULL) {
        NSLog(@"Error: could not get pixel buffer base address");
        CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
        return nil;
    }

    size_t sourceBytesPerRow = CVPixelBufferGetBytesPerRow(sourcePixelBuffer);
    size_t offset = CGRectGetMinY(croppingRect) * sourceBytesPerRow + CGRectGetMinX(croppingRect) * 4;

    vImage_Buffer croppedvImageBuffer = {
        .data = sourceData + offset,
        .height = CGRectGetHeight(croppingRect),
        .width = CGRectGetWidth(croppingRect),
        .rowBytes = sourceBytesPerRow
    };

    size_t scaledBytesPerRow = scaledSize.width * 4;
    void *scaledData = malloc(scaledSize.height * scaledBytesPerRow);
    if (scaledData == NULL) {
        NSLog(@"Error: out of memory");
        CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
        return nil;
    }

    vImage_Buffer scaledvImageBuffer = {
        .data = scaledData,
        .height = scaledSize.height,
        .width = scaledSize.width,
        .rowBytes = scaledBytesPerRow
    };

    /* The ARGB8888, ARGB16U, ARGB16S and ARGBFFFF functions work equally well on
     * other channel orderings of 4-channel images, such as RGBA or BGRA. */
    vImage_Error error = vImageScale_ARGB8888(&croppedvImageBuffer, &scaledvImageBuffer, nil, 0);
    CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);

    if (error != kvImageNoError) {
        NSLog(@"Error: %ld", error);
        free(scaledData);
        return nil;
    }

    OSType pixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer);
    CVPixelBufferRef outputPixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreateWithBytes(nil, scaledSize.width, scaledSize.height, pixelFormat, scaledData, scaledBytesPerRow, pixelBufferReleaseCallBack, nil, nil, &outputPixelBuffer);

    if (status != kCVReturnSuccess) {
        NSLog(@"Error: could not create new pixel buffer");
        free(scaledData);
        return nil;
    }

    return outputPixelBuffer;
}



This method is much simpler to read and has the benefit of being pretty agnostic to the pixel buffer format you pass in, which is a plus for certain use cases. Granted, you are limited to the formats Core Image supports.


CVPixelBufferRef createCroppedPixelBufferCoreImage(CVPixelBufferRef pixelBuffer,
                                                   CGRect cropRect,
                                                   CGSize scaleSize,
                                                   CIContext *context) {

    assertCropAndScaleValid(pixelBuffer, cropRect, scaleSize);

    CIImage *image = [CIImage imageWithCVImageBuffer:pixelBuffer];
    image = [image imageByCroppingToRect:cropRect];

    CGFloat scaleX = scaleSize.width / CGRectGetWidth(image.extent);
    CGFloat scaleY = scaleSize.height / CGRectGetHeight(image.extent);

    image = [image imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];

    // Due to the way [CIContext render:toCVPixelBuffer:] works, we need to translate the image so the cropped section is at the origin
    image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-image.extent.origin.x, -image.extent.origin.y)];

    CVPixelBufferRef output = NULL;

    CVPixelBufferCreate(nil,
                        CGRectGetWidth(image.extent),
                        CGRectGetHeight(image.extent),
                        CVPixelBufferGetPixelFormatType(pixelBuffer),
                        nil,
                        &output);

    if (output != NULL) {
        [context render:image toCVPixelBuffer:output];
    }

    return output;
}
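
For completeness, a hypothetical call site for this Core Image variant, assuming the same 299x299 target and the centered crop rect built with AVMakeRectWithAspectRatioInsideRect as in the delegate above. The CIContext should be created once and reused; creating one per frame is expensive.

// Hypothetical usage; 'ciContext' is assumed to be created once, e.g. in init.
CIContext *ciContext = [CIContext contextWithOptions:nil];

CGRect videoRect = CGRectMake(0, 0,
                              CVPixelBufferGetWidth(pixelBuffer),
                              CVPixelBufferGetHeight(pixelBuffer));
CGSize scaledSize = CGSizeMake(299, 299);
CGRect cropRect = AVMakeRectWithAspectRatioInsideRect(scaledSize, videoRect);

CVPixelBufferRef output = createCroppedPixelBufferCoreImage(pixelBuffer, cropRect, scaledSize, ciContext);
if (output != NULL) {
    // ... use the cropped and scaled buffer ...
    CVPixelBufferRelease(output); // the function returns a +1 retain count
}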



Step 1



Convert the CVPixelBuffer to a UIImage: start with [CIImage imageWithCVPixelBuffer:], then convert that CIImage to a CGImage, then that CGImage to a UIImage, using the standard methods.




CIImage *ciimage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgimage = [context createCGImage:ciimage
                                   fromRect:CGRectMake(0, 0,
                                                       CVPixelBufferGetWidth(pixelBuffer),
                                                       CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiimage = [UIImage imageWithCGImage:cgimage];
CGImageRelease(cgimage);



Step 2



Scale the image to the desired size/cropping by placing it in a UIImageView:


UIImageView *imageView = [[UIImageView alloc] initWithFrame:/*CGRect with new dimensions*/];
imageView.contentMode = /*UIViewContentMode with desired scaling/clipping style*/;
imageView.image = uiimage;
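
For the 299x299 center-crop case in the question, the placeholders might be filled in like this (UIViewContentModeScaleAspectFill together with clipsToBounds produces a centered crop):

// Example values for the placeholders above
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 299, 299)];
imageView.contentMode = UIViewContentModeScaleAspectFill; // fill the frame, preserving aspect ratio
imageView.clipsToBounds = YES;                            // clip the overflow for a centered crop
imageView.image = uiimage;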



Step 3



Snapshot the CALayer of said imageView with something like this:


#define snapshotOfView(__view) ( \
    (^UIImage *(void) { \
        CGRect __rect = [__view bounds]; \
        UIGraphicsBeginImageContextWithOptions(__rect.size, /*(BOOL)Opaque*/, /*(float)scaleResolution*/); \
        CGContextRef __context = UIGraphicsGetCurrentContext(); \
        [__view.layer renderInContext:__context]; \
        UIImage *__image = UIGraphicsGetImageFromCurrentImageContext(); \
        UIGraphicsEndImageContext(); \
        return __image; \
    })() \
)



In use:


uiimage = snapshotOfView(imageView);



Step 4



Convert that UIImage snapshot (now cropped/scaled) back into a CVPixelBuffer using a method like this one: https://stackoverflow.com/a/34990820/2057171



That is,


- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary *options = @{
        (NSString *)kCVPixelBufferCGImageCompatibilityKey : @YES,
        (NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
    };

    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          CGImageGetWidth(image),
                                          CGImageGetHeight(image),
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    if (status != kCVReturnSuccess) {
        NSLog(@"Operation failed");
    }
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 CGImageGetWidth(image),
                                                 CGImageGetHeight(image),
                                                 8,
                                                 4 * CGImageGetWidth(image),
                                                 rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, CGImageGetHeight(image));
    CGContextConcatCTM(context, flipVertical);
    CGAffineTransform flipHorizontal = CGAffineTransformMake(-1.0, 0.0, 0.0, 1.0, CGImageGetWidth(image), 0.0);
    CGContextConcatCTM(context, flipHorizontal);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}



In use:


pixelBuffer = [self pixelBufferFromCGImage:uiimage.CGImage];





Is it really required to create a CIImage, CGImage, UIImage, and a UIImageView, just to scale a CVPixelBuffer?
– James
yesterday





Please include the complete code.
– James
yesterday





@James I'm not sure that it's required per se, but it would likely be the easiest way of accomplishing this. I don't know of a built-in way to just scale it; recreating it at a new size is simple enough, though.
– Albert Renshaw
yesterday





@James Included all necessary code. Be sure to link the proper frameworks, such as CoreGraphics, CoreImage, etc.
– Albert Renshaw
yesterday







There are several drawbacks to this method: 1. considerable wasted CPU time; 2. considerable wasted memory bandwidth; 3. it requires use of the main thread (UIImageView); 4. following the steps resulted in a garbled image.
– Allen Humphreys
4 hours ago





