I would like a gradient transition between them – so there is no hard boundary between them. Is such an effect possible?
Edit:
The effect I want to achieve should look like this (the top comes from the camera preview, while the bottom should be a map…):
On iOS a similar effect uses a CameraOverlay that shows the map and sets the map layer's mask to a gradient:
CAGradientLayer *gradient = [CAGradientLayer layer];
gradient.frame = self.map.bounds;
gradient.colors = [NSArray arrayWithObjects:
                   (id)[[UIColor colorWithWhite:1.0 alpha:0.0] CGColor],
                   (id)[[UIColor colorWithWhite:1.0 alpha:1.0] CGColor],
                   nil];
gradient.startPoint = CGPointMake(0.5f, 0.0f);
gradient.endPoint = CGPointMake(0.5f, 0.5f);
self.map.layer.mask = gradient;
Solution
Edit: As mr_archano pointed out, the API is (now) defined such that the camera will not deliver preview data unless it has a SurfaceView to preview onto. Humph, such is the nature of progress, but that constraint is the key point here. The approach therefore uses three stacked views:
> The “bottom” SurfaceView is driven directly by the camera preview mechanism.
> The “middle” SurfaceView is for the Maps API.
> The “top” View is where the camera data is drawn to achieve the desired effect.
The core code therefore draws the camera image over the camera preview, with the top image deliberately colour-distorted so you can see that it is fully opaque at the top, fades out over the middle and is gone at the bottom.
The best way I can suggest to use this code is to implement the first four steps yourself and see them working, then add the last two steps and see those working, and only then transplant the key concepts into your own, no doubt bigger and more complex, code.
The first four steps:
> Create a custom View to act as the top (camera) view. This class draws a bitmap over whatever sits beneath it; the alpha value of each pixel in the bitmap determines how much of the view below shows through.
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Paint.Style;
import android.hardware.Camera.Size;
import android.util.AttributeSet;
import android.view.View;

public class CameraOverlayView extends View {
    private Paint paint;
    private Size incomingSize;
    private Bitmap bitmap = null;

    public CameraOverlayView(Context context) {
        super(context);
        init();
    }

    public CameraOverlayView(Context context, AttributeSet attrs) {
        super(context, attrs);
        init();
    }

    private void init() {
        paint = new Paint();
        paint.setStyle(Style.FILL_AND_STROKE);
        paint.setColor(0xffffffff);
        paint.setTextSize((float) 20.0);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        int width = canvas.getWidth();
        int height = canvas.getHeight();
        // Nothing to draw until the first preview frame has arrived
        if (bitmap != null) canvas.drawBitmap(bitmap, 0.0f, 0.0f, paint);
    }
}
> Put the three views in a FrameLayout, all set to fill_parent in both directions. The first is the “bottom” one (a SurfaceView, so the camera preview works). The second is the “middle” one (the map, or whatever view will show the map). The third is the “top” one (the view with the fading camera image).
<SurfaceView
    android:id="@+id/beneathSurfaceView"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent" />

<SurfaceView
    android:id="@+id/middleSurfaceView"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent" />

<com.blah.blah.blah.CameraOverlayView
    android:id="@+id/aboveCameraView"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent" />
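For reference, a minimal sketch of what the complete camera_overlay.xml could look like, assuming the FrameLayout parent described above (com.blah.blah.blah is the placeholder package name from the snippet):

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <SurfaceView
        android:id="@+id/beneathSurfaceView"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" />

    <SurfaceView
        android:id="@+id/middleSurfaceView"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" />

    <com.blah.blah.blah.CameraOverlayView
        android:id="@+id/aboveCameraView"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" />

</FrameLayout>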
> The main Activity sets up the camera so that it sends the preview image to the (bottom) SurfaceView and the preview image data to a processing routine; it registers a callback to capture that preview data. The two run in parallel.
import java.io.IOException;

import android.app.Activity;
import android.hardware.Camera;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class CameraOverlay extends Activity implements SurfaceHolder.Callback2 {
    private SurfaceView backSV;
    private CameraOverlayView cameraV;
    private SurfaceHolder cameraH;
    private Camera camera = null;
    private Camera.PreviewCallback cameraCPCB;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.camera_overlay);

        // Get the two views
        backSV = (SurfaceView) findViewById(R.id.beneathSurfaceView);
        cameraV = (CameraOverlayView) findViewById(R.id.aboveCameraView);

        // BACK: Putting the camera on the back SV (replace with whatever is driving that SV)
        cameraH = backSV.getHolder();
        cameraH.addCallback(this);

        // FRONT: For getting the data from the camera (for the front view)
        cameraCPCB = new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera camera) {
                cameraV.acceptCameraData(data, camera);
            }
        };
    }

    // Making the camera run and stop with state changes
    @Override
    public void onResume() {
        super.onResume();
        camera = Camera.open();
        camera.startPreview();
    }

    @Override
    public void onPause() {
        super.onPause();
        camera.setPreviewCallback(null);
        camera.stopPreview();
        camera.release();
        camera = null;
    }

    private void cameraImageToViewOn() {
        // FRONT
        cameraV.setIncomingSize(camera.getParameters().getPreviewSize());
        camera.setPreviewCallback(cameraCPCB);
    }

    private void cameraImageToViewOff() {
        // FRONT
        camera.setPreviewCallback(null);
    }

    // The callbacks which mean that the Camera does stuff ...
    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // If your preview can change or rotate, take care of those events here.
        // Make sure to stop the preview before resizing or reformatting it.
        if (holder == null) return;

        // stop preview before making changes
        try {
            cameraImageToViewOff(); // FRONT
            camera.stopPreview();
        } catch (Exception e) {
            // ignore: tried to stop a non-existent preview
        }

        // set preview size and make any resize, rotate or reformatting changes here

        // start preview with new settings
        try {
            camera.setPreviewDisplay(holder); // BACK
            camera.startPreview();
            cameraImageToViewOn(); // FRONT
        } catch (Exception e) {
        }
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            camera.setPreviewDisplay(holder); // BACK
            camera.startPreview();
            cameraImageToViewOn(); // FRONT
        } catch (IOException e) {
        }
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
    }

    @Override
    public void surfaceRedrawNeeded(SurfaceHolder holder) {
    }
}
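One thing the snippet above takes for granted: Camera.open() only works if the camera permission is declared in the manifest, so (assuming it is not already there) make sure it contains:

<uses-permission android:name="android.permission.CAMERA" />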
Things that are still missing:
> Making sure the camera image is the right way up
> Making sure the camera preview image is an optimal size (a sketch covering both follows below)
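Neither of those is in the code above; here is a minimal sketch of one way to handle both with the legacy Camera API. configureCamera is a hypothetical helper that would live in the Activity and be called before startPreview(); it assumes the back-facing camera (id 0).

// Sketch only: orientation and preview-size handling for the legacy Camera API.
private void configureCamera(Camera camera) {
    // 1. Rotate the preview so it appears the right way up for the current display rotation.
    //    Note: this only affects what is drawn on the SurfaceView; the byte[] handed to
    //    onPreviewFrame stays in the sensor's native orientation.
    Camera.CameraInfo info = new Camera.CameraInfo();
    Camera.getCameraInfo(0, info);
    int rotation = getWindowManager().getDefaultDisplay().getRotation();
    int degrees = 0;
    switch (rotation) {
        case Surface.ROTATION_0:   degrees = 0;   break;
        case Surface.ROTATION_90:  degrees = 90;  break;
        case Surface.ROTATION_180: degrees = 180; break;
        case Surface.ROTATION_270: degrees = 270; break;
    }
    camera.setDisplayOrientation((info.orientation - degrees + 360) % 360);

    // 2. Pick the supported preview size whose pixel count is closest to the overlay view's.
    Camera.Parameters params = camera.getParameters();
    int target = cameraV.getWidth() * cameraV.getHeight();
    Camera.Size best = null;
    for (Camera.Size s : params.getSupportedPreviewSizes()) {
        if (best == null
                || Math.abs(s.width * s.height - target) < Math.abs(best.width * best.height - target)) {
            best = s;
        }
    }
    params.setPreviewSize(best.width, best.height);
    camera.setParameters(params);
}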
> Now add two methods to the view created in the first step. The first makes sure the view knows the size of the incoming image data. The second takes the preview image data, converts it to a bitmap (deliberately distorting the colours along the way so the effect is visible), applies the alpha gradient and displays it.
// These two methods go inside CameraOverlayView (they additionally need
// java.nio.IntBuffer and android.graphics.Color imports).
public void setIncomingSize(Size size) {
    incomingSize = size;
    if (bitmap != null) bitmap.recycle();
    bitmap = Bitmap.createBitmap(size.width, size.height, Bitmap.Config.ARGB_8888);
}

public void acceptCameraData(byte[] data, Camera camera) {
    int width = incomingSize.width;
    int height = incomingSize.height;

    // the bitmap we want to fill with the image
    int numPixels = width * height;

    // the buffer we fill up which we then fill the bitmap with
    IntBuffer intBuffer = IntBuffer.allocate(width * height);

    // If you're reusing a buffer, the next line is imperative to refill from the start - if not, good practice
    intBuffer.position(0);

    // Get each pixel, one at a time
    int Y;
    int xby2, yby2;
    int R, G, B, alpha;
    float U, V, Yf;
    for (int y = 0; y < height; y++) {
        // Set the transparency based on how far down the image we are:
        if (y < 200) alpha = 255;          // This image only at the top
        else if (y < 455) alpha = 455 - y; // Fade over the next 255 lines
        else alpha = 0;                    // nothing after that

        // For speed's sake, you should probably break out of this loop once alpha is zero ...
        for (int x = 0; x < width; x++) {
            // Get the Y value, stored in the first block of data
            // The logical "AND 0xff" is needed to deal with the signed issue
            Y = data[y * width + x] & 0xff;

            // Get U and V values, stored after Y values, one per 2x2 block
            // of pixels, interleaved. Prepare them as floats with correct range
            // ready for calculation later.
            xby2 = x / 2;
            yby2 = y / 2;
            U = (float) (data[numPixels + 2 * xby2 + yby2 * width] & 0xff) - 128.0f;
            V = (float) (data[numPixels + 2 * xby2 + 1 + yby2 * width] & 0xff) - 128.0f;

            // Do the YUV -> RGB conversion
            Yf = 1.164f * ((float) Y) - 16.0f;
            R = (int) (Yf + 1.596f * V);
            G = 2 * (int) (Yf - 0.813f * V - 0.391f * U); // Distorted to show effect
            B = (int) (Yf + 2.018f * U);

            // Clip rgb values to 0-255
            R = R < 0 ? 0 : R > 255 ? 255 : R;
            G = G < 0 ? 0 : G > 255 ? 255 : G;
            B = B < 0 ? 0 : B > 255 ? 255 : B;

            // Put that pixel in the buffer (all four channels - alpha, red, green, blue)
            intBuffer.put(Color.argb(alpha, R, G, B));
        }
    }

    // Get buffer ready to be read
    intBuffer.flip();

    // Push the pixel information from the buffer onto the bitmap.
    bitmap.copyPixelsFromBuffer(intBuffer);
    this.invalidate();
}
Notes on the second routine:
> It assumes the incoming camera preview format is NV21. Other formats may be usable, but some pain is guaranteed; see Converting YUV->RGB(Image processing)->YUV during onPreviewFrame in android?. (A defensive format check is sketched after these notes.)
> It could probably be made faster, both on newer versions of Android and with some code optimisation.
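NV21 is the default preview format and the only one every device must support, but if you want to be defensive you can request it explicitly before registering the preview callback. This is not part of the original answer, just a small sketch of where such a check could go (e.g. in cameraImageToViewOn(); ImageFormat is android.graphics.ImageFormat):

// Sketch only: make sure the preview really is NV21, since the YUV -> RGB code above assumes it.
Camera.Parameters params = camera.getParameters();
if (params.getSupportedPreviewFormats().contains(ImageFormat.NV21)) {
    params.setPreviewFormat(ImageFormat.NV21); // the default, and the only preview format all devices must support
    camera.setParameters(params);
}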
The last two steps:
> Make the camera SurfaceView small enough that it hides behind the non-faded part of the top view, i.e. change its android:layout_height to, for example, 60dp.
> Set up the middle SurfaceView to receive the map.