Two papers in ICFDA14
Guimei Zhang
MESA (Mechatronics, Embedded Systems and Automation) Lab
School of Engineering, University of California, Merced
E: guimei.zh@163.com  Phone: 209-658-4838
Lab: CAS Eng 820 (T: 228-4398)
June 30, 2014, Monday, 4:00-6:00 PM
Applied Fractional Calculus Workshop Series @ MESA Lab @ UC Merced
The first paper
Paper title:
Motivation
1. Detect and localize objects in single-view RGB images of environments with arbitrary illumination and heavy clutter, for the purpose of autonomous grasping.
2. Objects can be of arbitrary color and interior texture; we therefore assume knowledge of only their 3D model, without any appearance/texture information.
3. Using 3D models makes an object detector immune to intra-class texture variations.
Motivation
• In this paper, we address the problem of a robot grasping 3D objects of known 3D shape from their projections in single images of cluttered scenes.
• We further abstract the 3D model by using only its 2D contour, so detection is driven by the shape of the 3D object's projected occluding boundary, as sketched below.
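As a rough illustration of abstracting a 3D model to its 2D contour (not from the paper; the pinhole projection and the convex-hull contour approximation are assumptions for this sketch), the snippet below projects model vertices into the image and approximates the occluding boundary by their 2D convex hull:

```python
import numpy as np
from scipy.spatial import ConvexHull

def project_contour(vertices, K, R, t):
    """Project 3D model vertices with a pinhole camera and approximate
    the occluding boundary by their 2D convex hull. (A real silhouette
    would come from rendering; the hull is a crude stand-in that only
    works for convex-ish objects.)"""
    cam = (R @ vertices.T + t.reshape(3, 1)).T   # world -> camera frame
    uv = (K @ cam.T).T                           # camera -> image plane
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective divide
    hull = ConvexHull(uv)
    return uv[hull.vertices]                     # ordered contour points

# Toy usage: a unit cube seen from 2 m away
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]], float)
contour = project_contour(verts, K, np.eye(3), np.array([0, 0, 2.0]))
print(contour.shape)  # (4, 2): corners of the front face dominate the hull
```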
Overview of the proposed approach
a) The input image.
b) Edge image computed with the gPb method.
c) The hypothesis bounding box (red) is segmented into superpixels.
d) The set of superpixels with the closest distance to the model contour is selected.
e) Three textured synthetic views of the final pose estimate are shown.
How it is done
1. 3D model acquisition and rendering (using a low-cost RGB-D depth sensor and a dense surface reconstruction algorithm, KinectFusion)
2. Image features (edges)
3. Object detection
4. Shape descriptor
5. Shape verification for contour extraction
6. Pose estimation (image registration)
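A hedged sketch of the kind of contour-to-edge matching that steps 4-5 suggest (the chamfer-distance formulation below is a common choice for this task, not necessarily the paper's exact descriptor): a distance transform of the edge map scores how well a projected model contour lies on image edges.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(edge_map, contour_pts):
    """Mean distance from contour points to the nearest image edge.
    Lower is better: the model contour sits on top of image edges."""
    # Distance transform: per pixel, distance to the nearest edge pixel
    dt = distance_transform_edt(~edge_map.astype(bool))
    rows = np.clip(contour_pts[:, 1].round().astype(int), 0, edge_map.shape[0] - 1)
    cols = np.clip(contour_pts[:, 0].round().astype(int), 0, edge_map.shape[1] - 1)
    return dt[rows, cols].mean()

# Toy usage: a synthetic edge image with a vertical edge at column 50
edges = np.zeros((100, 100), bool)
edges[:, 50] = True
on_edge  = np.stack([np.full(100, 50), np.arange(100)], axis=1)  # (x, y) pairs
off_edge = np.stack([np.full(100, 70), np.arange(100)], axis=1)
print(chamfer_score(edges, on_edge))   # 0.0  (perfect fit)
print(chamfer_score(edges, off_edge))  # 20.0 (20 px off the edge)
```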
Example
(a) Bounding boxes ordered by detection score.
(b) Corresponding pose output.
(c) Segmentation of the top-scored detection.
(d) Foreground mask selected by shape.
(e) Three iterations of pose refinement.
(f) Visualization of the PR2 model with the Kinect point cloud.
(g) Another view of the same scene.
The second paper
Paper title:
Motivation
Problems:
• Big and complex scenes contain huge numbers of 3D points; labeling them by hand takes a great deal of time.
• Model learning suffers from a bias problem caused by bias accumulation during sample collection.
Motivation
Therefore, this paper proposes a semi-supervised method to learn category models from unlabeled "big point cloud data". The algorithm only requires labeling a small number of object seeds in each object category to start the model learning, as shown in Fig. 1. This design saves both manual labeling and computation cost, satisfying the model-mining efficiency requirement.
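A minimal self-training sketch of this seed-and-grow idea (the nearest-neighbor rule and the distance threshold are illustrative assumptions, not the paper's exact learner): a handful of labeled seeds per category iteratively absorb the most confident unlabeled segments.

```python
import numpy as np

def grow_from_seeds(feats, seed_idx, seed_lab, rounds=10, tau=0.5):
    """Semi-supervised self-training: start from a few labeled seed
    segments and iteratively label the unlabeled segments that lie
    within distance tau of the current labeled set (1-NN in feature
    space). Returns one label per segment, -1 = still unlabeled."""
    labels = np.full(len(feats), -1)
    labels[seed_idx] = seed_lab
    for _ in range(rounds):
        lab_mask = labels != -1
        unl = np.where(~lab_mask)[0]
        if len(unl) == 0:
            break
        # Distance of each unlabeled segment to every labeled segment
        d = np.linalg.norm(feats[unl][:, None] - feats[lab_mask][None], axis=2)
        nearest = d.argmin(axis=1)
        confident = d.min(axis=1) < tau        # only absorb close matches
        if not confident.any():
            break
        labels[unl[confident]] = labels[lab_mask][nearest[confident]]
    return labels

# Toy usage: two clusters of segment features, one labeled seed each
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.2, (50, 3)), rng.normal(2, 0.2, (50, 3))])
labels = grow_from_seeds(feats, seed_idx=[0, 50], seed_lab=[0, 1])
print((labels == -1).sum(), "segments left unlabeled")
```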
The main contributions
• To the best of our knowledge, this is the first proposal for efficient mining of category models from "big point cloud data". With limited computation and human labeling, the method is oriented toward efficient construction of a category model base.
• A multiple-model strategy is proposed as a solution to the bias problem; it provides several discrete and selective category boundaries, as sketched below.
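One way to read the multiple-model strategy (a sketch under the assumption that the sub-models are feature-space centroids; the paper's models may well be richer): each category keeps several prototypes, so one biased batch of samples cannot drag the whole category boundary.

```python
import numpy as np

def fit_prototypes(samples, k=3, iters=20, seed=0):
    """Plain k-means: represent one category by k prototypes instead of
    a single mean, giving several discrete, selective sub-boundaries."""
    rng = np.random.default_rng(seed)
    protos = samples[rng.choice(len(samples), k, replace=False)]
    for _ in range(iters):
        assign = np.linalg.norm(samples[:, None] - protos[None], axis=2).argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                protos[j] = samples[assign == j].mean(axis=0)
    return protos

def classify(x, prototypes_by_cat):
    """Label = category of the nearest prototype over all categories."""
    best = min(prototypes_by_cat.items(),
               key=lambda kv: np.linalg.norm(kv[1] - x, axis=1).min())
    return best[0]

# Toy usage: 'wall' features split into two distant modes
rng = np.random.default_rng(1)
wall = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(5, 0.1, (30, 2))])
tree = rng.normal(2.5, 0.1, (30, 2))
models = {"wall": fit_prototypes(wall), "tree": fit_prototypes(tree)}
print(classify(np.array([4.9, 5.1]), models))  # "wall"
```

Note the failure mode this avoids: a single mean for the bimodal "wall" class would sit near (2.5, 2.5), directly on top of "tree"; multiple prototypes keep the two category boundaries separated.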
Experiment
Model-based point labeling results. Different colors indicate different categories, i.e. wall (green), tree (red), and street (blue).