The two most important parts:
A filter inherits from GPUImageOutput and conforms to GPUImageInput, so its processed output can in turn serve as the input of the next filter.
```objc
@protocol GPUImageInput
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
- (void)setInputFramebuffer:(GPUImageFramebuffer *)newInputFramebuffer atIndex:(NSInteger)textureIndex;
- (NSInteger)nextAvailableTextureIndex;
- (void)setInputSize:(CGSize)newSize atIndex:(NSInteger)textureIndex;
- (void)setInputRotation:(GPUImageRotationMode)newInputRotation atIndex:(NSInteger)textureIndex;
- (CGSize)maximumOutputSize;
- (void)endProcessing;
- (BOOL)shouldIgnoreUpdatesToThisTarget;
- (BOOL)enabled;
- (BOOL)wantsMonochromeInput;
- (void)setCurrentlyReceivingMonochromeInput:(BOOL)newValue;
@end
```
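To make the output-to-input relay concrete, here is a minimal C sketch of the idea (all names are hypothetical, not GPUImage API): an output node keeps a list of downstream targets, and when a frame is ready it processes the frame and hands the result to every target, which may itself forward the result further down the chain, just as newFrameReadyAtTime: does.

```c
#include <assert.h>
#include <stddef.h>

/* Miniature of GPUImage's chain idea: an "output" keeps a list of targets
 * conforming to an "input" interface; when a frame is ready, the frame is
 * processed and propagated to every target down the chain. */
typedef struct Node Node;
struct Node {
    int (*process)(int pixel);   /* stand-in for the filter's shader pass */
    Node *targets[4];            /* downstream GPUImageInput-like targets */
    int target_count;
    int last_output;             /* last frame this node produced */
};

static void new_frame_ready(Node *node, int pixel) {
    int out = node->process ? node->process(pixel) : pixel;
    node->last_output = out;
    for (int i = 0; i < node->target_count; i++)
        new_frame_ready(node->targets[i], out);  /* hand off to each target */
}

/* two toy "filters" */
static int invert(int p) { return 255 - p; }
static int darken(int p) { return p / 2; }
```

Chaining a source through invert and darken to a sink mirrors the source -> filter -> filter -> output pipeline described below; branching falls out naturally from having multiple targets on one node.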
GPUImageFramebuffer is the wrapper class around a framebuffer; depending on onlyGenerateTexture it generates either just a texture or a full framebuffer. Excerpted from - (void)generateFramebuffer;:
If fast texture upload is supported, CVPixelBufferCreate creates renderTarget, CVOpenGLESTextureCacheCreateTextureFromImage creates renderTexture from renderTarget (the source image), and finally glFramebufferTexture2D binds the framebuffer to renderTexture, so the framebuffer renders into the texture. (Note: a framebuffer can also be attached to a renderbuffer, often called a colorbuffer, which is displayed directly on a CALayer; attaching to a texture is usually used for intermediate results.)
If fast upload is not supported: first generate the texture, then bind it, upload the data to the GPU with glTexImage2D, and finally bind the framebuffer to the texture with glFramebufferTexture2D.
- (CGImageRef)newCGImageFromFramebufferContents; in GPUImageFramebuffer extracts the image data from the framebuffer and produces a CGImageRef:
```objc
CGDataProviderRef dataProvider = NULL;
if ([GPUImageContext supportsFastTextureUpload])
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
    NSUInteger paddedWidthOfImage = CVPixelBufferGetBytesPerRow(renderTarget) / 4.0; // due to byte alignment, the padded width can exceed the image width
    NSUInteger paddedBytesForImage = paddedWidthOfImage * (int)_size.height * 4;

    glFinish(); // blocking call: force the preceding GL commands to be submitted to the GPU
    CFRetain(renderTarget); // retain the pixel buffer here and release it in the data provider callback, so its bytes are not deallocated prematurely (e.g. during a photo write operation)
    [self lockForReading];
    rawImagePixels = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
    dataProvider = CGDataProviderCreateWithData((__bridge_retained void *)self, rawImagePixels, paddedBytesForImage, dataProviderUnlockCallback);
    // In case the framebuffer is swapped out on the filter, the shared framebuffer cache holds a strong reference to it for as long as the image exists
    [[GPUImageContext sharedFramebufferCache] addFramebufferToActiveImageCaptureList:self];
#else
#endif
}
else
{
    [self activateFramebuffer];
    rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
    glReadPixels(0, 0, (int)_size.width, (int)_size.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels); // blocking call: read the raw image data straight out of the framebuffer
    dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
    [self unlock]; // don't need to keep this around anymore
}
```
Finally CGImageCreate builds the image and returns it.
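The paddedWidthOfImage arithmetic above exists because Core Video may round each row of an RGBA8 pixel buffer up to an alignment boundary, so CVPixelBufferGetBytesPerRow can be larger than width * 4. A small C sketch of that arithmetic (the alignment value of 64 is an assumption for illustration, not something GPUImage specifies):

```c
#include <assert.h>
#include <stddef.h>

/* Row padding: each row of 4-byte RGBA pixels is rounded up to an
 * alignment boundary, so bytesPerRow >= width * 4. */
static size_t aligned_bytes_per_row(size_t width_pixels, size_t alignment) {
    size_t raw = width_pixels * 4;                   /* 4 bytes per RGBA pixel */
    return (raw + alignment - 1) / alignment * alignment;
}

/* Mirrors the excerpt: paddedWidthOfImage = bytesPerRow / 4,
 * paddedBytesForImage = paddedWidth * height * 4. */
static size_t padded_bytes_for_image(size_t bytes_per_row, size_t height) {
    size_t padded_width = bytes_per_row / 4;
    return padded_width * height * 4;
}
```

For a 100-pixel-wide buffer aligned to 64 bytes, bytesPerRow is 448, giving a padded width of 112 pixels rather than 100, which is why the code derives the width from bytesPerRow instead of using _size.width directly.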
The class comment already says it clearly: video capture, photo capture, and the rest all use this as their base class, following the same pattern: a source (video or still image) uploads image frames to OpenGL ES as textures, and those textures serve as the input of the next filter, forming a chain of texture-processing steps.
```objc
/** GPUImage's base source object

 Images or frames of video are uploaded from source objects, which are subclasses of GPUImageOutput. These include:

 - GPUImageVideoCamera (for live video from an iOS camera)
 - GPUImageStillCamera (for taking photos with the camera)
 - GPUImagePicture (for still images)
 - GPUImageMovie (for movies)

 Source objects upload still image frames to OpenGL ES as textures, then hand those textures off to the next objects in the processing chain.
 */
```
The utility functions in this class, runSynchronouslyOnContextQueue and friends, use dispatch_get_specific to avoid deadlocks; note that the deprecated dispatch_get_current_queue must not be used.
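The trick is that GPUImage tags its context queue with dispatch_queue_set_specific and checks dispatch_get_specific before calling dispatch_sync: a block already running on that queue is executed inline rather than dispatch_sync-ing onto itself, which would deadlock. As an analogy only (this is plain C with thread-local storage, not Apple's GCD API), the pattern looks like:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* The "am I already on the queue?" marker lives in thread-local storage,
 * standing in for dispatch_queue_set_specific / dispatch_get_specific. */
static pthread_key_t queue_key;
static pthread_once_t key_once = PTHREAD_ONCE_INIT;
static void make_key(void) { pthread_key_create(&queue_key, NULL); }

static void run_synchronously(void (*block)(void)) {
    pthread_once(&key_once, make_key);
    if (pthread_getspecific(queue_key) != NULL) {
        block();                                   /* already on the "queue": run inline */
    } else {
        pthread_setspecific(queue_key, (void *)1); /* real code: dispatch_sync onto the queue */
        block();
        pthread_setspecific(queue_key, NULL);
    }
}

/* demo blocks: nested_work re-enters run_synchronously */
static int call_count = 0;
static void inner_work(void)  { call_count++; }
static void nested_work(void) { call_count++; run_synchronously(inner_work); }
```

Without the inline-execution branch, the nested call would be the classic dispatch_sync-onto-the-current-queue deadlock that dispatch_get_current_queue was often (incorrectly) used to guard against.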
Reading three typical classes, (GPUImagePicture)source -> (GPUImageFilter)filter -> (GPUImageView)output, shows how the processing chain is formed; of course other pipelines exist as well.
GPUImagePicture only inherits from GPUImageOutput; it is dedicated to reading input data, uploading it to the GPU, and handing it to the next link in the chain, a GPUImageFilter, for processing.
GPUImageFilter (in practice you usually use one of its subclasses) inherits from GPUImageOutput and also conforms to the GPUImageInput protocol. Its class comment:
```objc
/** GPUImage's base filter class

 Filters and other subsequent elements in the chain conform to the GPUImageInput protocol, which lets them take in the supplied or processed texture from the previous link in the chain and do something with it. Objects one step further down the chain are considered targets, and processing can be branched by adding multiple targets to a single output or filter.
 */
```
```objc
- (void)setInputFramebuffer:(GPUImageFramebuffer *)newInputFramebuffer atIndex:(NSInteger)textureIndex;
{
    firstInputFramebuffer = newInputFramebuffer;
    [firstInputFramebuffer lock];
}
```
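The lock here is GPUImageFramebuffer's own reference count, which is how the shared framebuffer cache knows when a buffer can be recycled. A hypothetical C sketch of that counting scheme (names invented; the real class calls back into GPUImageFramebufferCache):

```c
#include <assert.h>

/* Each consumer locks the framebuffer while using it; when the count
 * drops below one, the buffer goes back to the shared cache for reuse
 * (modeled here as a flag). */
typedef struct {
    int reference_count;
    int returned_to_cache;
} Framebuffer;

static void fb_lock(Framebuffer *fb) { fb->reference_count++; }

static void fb_unlock(Framebuffer *fb) {
    fb->reference_count--;
    if (fb->reference_count < 1)
        fb->returned_to_cache = 1;  /* real code: return to GPUImageFramebufferCache */
}
```

This is why setInputFramebuffer: locks and renderToTextureWithVertices: unlocks: the filter holds the input framebuffer alive exactly for the duration of its render pass.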
```objc
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
    static const GLfloat imageVertices[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
    }; // vertex data: two triangles covering the texture area

    [self renderToTextureWithVertices:imageVertices textureCoordinates:[[self class] textureCoordinatesForRotation:inputRotation]];
    [self informTargetsAboutNewFrameAtTime:frameTime];
}
```
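Those four vertices are drawn as a GL_TRIANGLE_STRIP: GL forms triangles (0,1,2) and (1,2,3), which together tile the whole [-1,1] x [-1,1] clip-space quad. A small C sketch computing the two triangle areas to show they cover the quad's area of 4 (the cross-product area formula is standard, not GPUImage code):

```c
#include <assert.h>

/* Area of the triangle over vertices a, b, c of a flat (x, y) array,
 * via half the absolute cross product of two edge vectors. */
static float tri_area(const float *v, int a, int b, int c) {
    float x1 = v[2*a], y1 = v[2*a + 1];
    float x2 = v[2*b], y2 = v[2*b + 1];
    float x3 = v[2*c], y3 = v[2*c + 1];
    float cross = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1);
    return (cross < 0 ? -cross : cross) / 2.0f;
}
```

This is why glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) below suffices to fill the entire output framebuffer with the shaded texture.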
```objc
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
    if (self.preventRendering)
    {
        [firstInputFramebuffer unlock];
        return;
    }

    [GPUImageContext setActiveShaderProgram:filterProgram];

    // fetch a reusable outputFramebuffer from the GPUImageFramebufferCache
    outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:[self sizeOfFBO] textureOptions:self.outputTextureOptions onlyTexture:NO];
    [outputFramebuffer activateFramebuffer];
    if (usingNextFrameForImageCapture)
    {
        [outputFramebuffer lock];
    }

    [self setUniformsForProgramAtIndex:0];

    glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
    glClear(GL_COLOR_BUFFER_BIT);

    glActiveTexture(GL_TEXTURE2); // select texture unit GL_TEXTURE2
    glBindTexture(GL_TEXTURE_2D, [firstInputFramebuffer texture]); // bind the texture of the current input framebuffer
    glUniform1i(filterInputTextureUniform, 2);

    // set the vertex data for the vertex shader and the texture coordinates used by the fragment shader
    glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
    glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    [firstInputFramebuffer unlock];

    if (usingNextFrameForImageCapture)
    {
        dispatch_semaphore_signal(imageCaptureSemaphore);
    }
}
```
informTargetsAboutNewFrameAtTime: loops over the current targets twice: the first pass essentially calls the superclass's setInputFramebufferForTarget: (on GPUImageOutput), and the second pass calls newFrameReadyAtTime: on each target, bringing us back to where the chain started when the target was added to the source.
As the final output, the target (GPUImageView) implements only the GPUImageInput protocol; it can only accept data handed over by a source or a filter, and no longer acts as an output.
Its setInputFramebuffer: and newFrameReadyAtTime: handling mirrors that of the filter, with one extra call, shown below. As mentioned at the beginning, a framebuffer can also be attached to a renderbuffer (often called a colorbuffer), and the renderbuffer is displayed directly on a CAEAGLLayer; with a screen-sized buffer, the result appears directly on the phone screen.
```objc
- (void)presentFramebuffer;
{
    glBindRenderbuffer(GL_RENDERBUFFER, displayRenderbuffer);
    [[GPUImageContext sharedImageProcessingContext] presentBufferForDisplay];
}
```
displayRenderbuffer is created by the createDisplayFramebuffer method; it is all boilerplate, nothing worth recording.
GPUImage's code structure is a model of chained processing and well worth studying; this post only records the data flow of the processing chain (source -> filter -> filter ... -> output), leaving many details for later.
GPUImage source: https://github.com/BradLarson/GPUImage
Original post: http://www.cnblogs.com/edisongz/p/6978020.html