The sequence selected is the same as the one used to generate Figure 1 of the paper.

 

Section 3 includes an experimental comparison with the original ORB-SLAM2 algorithm on the TUM RGB-D dataset (Sturm et al.). In contrast to previous robust approaches to egomotion estimation in dynamic environments, we propose a novel robust visual odometry (VO). By doing this, we get precision close to stereo mode with greatly reduced computation times.

ORB-SLAM2 is a real-time SLAM library for monocular, stereo and RGB-D cameras that computes the camera trajectory and a sparse 3D reconstruction (in the stereo and RGB-D case with true scale). The libs directory contains options for training and testing as well as custom dataloaders for the TUM, NYU and KITTI datasets. An Open3D Image can be directly converted to/from a NumPy array.

The Technical University of Munich (TUM) is one of Europe's top universities. Students have an ITO account and have bought printing quota from the Fachschaft.

Among the various SLAM datasets, we have selected those that provide pose and map information. The presented framework is composed of two CNNs (a depth CNN and a pose CNN) which are trained concurrently and tested. The TUM RGB-D dataset contains walking, sitting and desk sequences; the walking sequences are mainly used in our experiments, since they are highly dynamic scenarios in which two persons walk back and forth. We conduct experiments both on the TUM RGB-D dataset and in a real-world environment. Most of the segmented parts have been properly inpainted with information from the static background.
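The color and depth streams of the dataset are not perfectly synchronized, so frames are usually matched by timestamp before use. The benchmark ships its own associate.py tool for this; the following is only an illustrative sketch of the idea, with my own function name and a tolerance chosen for the example:

```python
def associate(rgb_stamps, depth_stamps, max_diff=0.02):
    """Match each RGB timestamp to the closest depth timestamp
    within max_diff seconds; each depth frame is used at most once."""
    matches = []
    used = set()
    for t_rgb in sorted(rgb_stamps):
        best = min(
            (t for t in depth_stamps if t not in used),
            key=lambda t: abs(t - t_rgb),
            default=None,
        )
        if best is not None and abs(best - t_rgb) <= max_diff:
            matches.append((t_rgb, best))
            used.add(best)
    return matches

# third RGB frame has no depth frame within tolerance, so it is dropped
pairs = associate([0.00, 0.033, 0.066], [0.001, 0.034, 0.100])
```

The greedy nearest-neighbour matching shown here is a simplification; the real tool considers all candidate pairs sorted by time difference.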
The sensor of this dataset is a handheld Kinect RGB-D camera with a resolution of 640 × 480. Map points: a list of 3-D points that represent the map of the environment, reconstructed from the key frames. Each file is listed on a separate line, formatted as: timestamp file_path. Here, depth refers to the distance from the camera to a scene point.

We extensively evaluate the system on the widely used TUM RGB-D dataset, which contains sequences of small- to large-scale indoor environments, with respect to different parameter combinations. Covisibility graph: a graph with key frames as nodes. © RBG Rechnerbetriebsgruppe Informatik, Technische Universität München, 2013–2018, rbg@in.tum.de. We provide one example to run the SLAM system on the TUM dataset as RGB-D. Here, RGB-D refers to a dataset with both RGB (color) images and depth images. Classic SLAM approaches typically use laser range sensors.

The TUM dataset consists of different types of sequences, which provide color and depth images at a resolution of 640 × 480 captured with a Microsoft Kinect sensor. It also comes with evaluation tools. RGB-Fusion reconstructed the scene on the fr3/long_office_household sequence of the TUM RGB-D dataset. The human body masks are derived from the segmentation model. The TUM RGB-D dataset, which includes 39 sequences of offices, was selected as the indoor dataset to test the SVG-Loop algorithm.
Directly use pixel intensities! The feasibility of the proposed method was verified on the TUM RGB-D dataset and in real scenarios using Ubuntu 18. The process of using vision sensors to perform SLAM is called visual SLAM. The images contain a slight jitter. In this repository, the overall dataset chart is represented in a simplified version. Trajectories can be used with the TUM RGB-D or UZH trajectory evaluation tools and have the following format: timestamp[s] tx ty tz qx qy qz qw.

An Open3D RGBDImage is composed of two images, RGBDImage.color and RGBDImage.depth. We select images in dynamic scenes for testing. Visual odometry is an important area of information fusion in which the central aim is to estimate the pose of a robot using data collected by visual sensors. Two popular datasets, the TUM RGB-D and KITTI datasets, are processed in the experiments.

Features include: automatic lecture scheduling and access management coupled with CAMPUSOnline. Exercises will be held remotely and live in the Thursday slot roughly every 3 to 4 weeks and will not be recorded.

The TUM dataset is a well-known dataset for evaluating SLAM systems in indoor environments. We evaluate the methods on several recently published and challenging benchmark datasets from the TUM RGB-D and ICL-NUIM series. See the settings file provided for the TUM RGB-D cameras. The stereo case shows the final trajectory and sparse reconstruction of sequence 00 from the KITTI dataset [2]. In particular, our group has a strong focus on direct methods, where, contrary to the classical pipeline of feature extraction and matching, we directly optimize intensity errors.
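The trajectory format above (timestamp[s] tx ty tz qx qy qz qw, one pose per line, comment lines starting with '#') is easy to parse. A minimal sketch with illustrative names and a made-up example line:

```python
def parse_tum_line(line):
    """Parse one pose line in TUM trajectory format:
    timestamp tx ty tz qx qy qz qw; '#' lines are comments."""
    if line.startswith("#") or not line.strip():
        return None
    vals = [float(v) for v in line.split()]
    assert len(vals) == 8, "expected 8 fields"
    t, tx, ty, tz, qx, qy, qz, qw = vals
    return {"t": t, "trans": (tx, ty, tz), "quat": (qx, qy, qz, qw)}

# identity-orientation pose at t = 1.0 s (illustrative values)
pose = parse_tum_line("1.0 2.0 3.0 4.0 0.0 0.0 0.0 1.0")
```

Note the quaternion is stored scalar-last (qw at the end), which matters when feeding the poses to libraries that expect scalar-first order.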
ORB-SLAM3 is the first real-time SLAM library able to perform visual, visual-inertial and multi-map SLAM with monocular, stereo and RGB-D cameras, using pin-hole and fisheye lens models. [NYUDv2] The NYU-Depth V2 dataset consists of 1449 RGB-D images showing interior scenes, whose labels are usually mapped to 40 classes. TUM-Live is the livestreaming and VoD service of the Rechnerbetriebsgruppe at the department of informatics and mathematics at the Technical University of Munich.

Usage: generate_pointcloud.py [-h] rgb_file depth_file ply_file — this script reads a registered pair of color and depth images and generates a colored 3D point cloud in the PLY format.

Installing Matlab (students/employees): as an employee of certain faculty affiliations or as a student, you are allowed to download and use Matlab and most of its toolboxes. ORB-SLAM2 is a complete SLAM solution that provides monocular, stereo and RGB-D interfaces.

A pose graph is a graph in which the nodes represent pose estimates and are connected by edges representing the relative poses between nodes, with measurement uncertainty [23]. Place the data in the ./data/neural_rgbd_data folder. The system is evaluated on the TUM RGB-D dataset [9]. The video sequences are recorded by the RGB-D camera of a Microsoft Kinect at a frame rate of 30 Hz, with a resolution of 640 × 480 pixels. To do this, please write an email to rbg@in.tum.de.

Our method, named DP-SLAM, is implemented on the public TUM RGB-D dataset. The system supports RGB-D sensors and pure localization on a previously stored map, two features required by a significant proportion of service robot applications. Figure 6 displays the synthetic images from the public TUM RGB-D dataset.
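At its core, generate_pointcloud.py back-projects every pixel through the pinhole camera model. A hedged sketch of that single step, assuming the TUM convention that 16-bit depth PNGs store depth in units of 1/5000 m; the intrinsics in the usage line are illustrative placeholders, not the calibrated values of any particular sequence:

```python
FACTOR = 5000.0  # assumed TUM depth scaling: raw 16-bit value / 5000 = metres

def backproject(u, v, depth_raw, fx, fy, cx, cy):
    """Back-project pixel (u, v) with raw 16-bit depth value into
    camera coordinates (metres) using the pinhole model."""
    z = depth_raw / FACTOR
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# illustrative intrinsics only; use the calibration of your own sequence
x, y, z = backproject(320, 240, 10000, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

The real script additionally skips pixels with zero depth (invalid measurements) and attaches the RGB value of each pixel to its 3D point before writing the PLY file.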
Loop closure detection is an important component of simultaneous localization and mapping (SLAM). The data was recorded at full frame rate (30 Hz) and sensor resolution 640 × 480. Our extensive experiments on three standard datasets, Replica, ScanNet, and TUM RGB-D, show that ESLAM improves the accuracy of 3D reconstruction and camera localization of state-of-the-art dense visual SLAM methods by more than 50%, while running up to 10 times faster and not requiring any pre-training. In the ATY-SLAM system, we employ a combination of the YOLOv7-tiny object detection network, motion consistency detection, and the LK optical flow algorithm to detect dynamic regions in the image. The figure shows the reconstructed scene for fr3/walking-halfsphere from the TUM RGB-D dynamic dataset.

Additionally, because the object runs on multiple threads, the frame it is currently processing can differ from the most recently added frame. The result is written to a .txt file at the end of a sequence, using the TUM RGB-D / TUM monoVO format ([timestamp x y z qx qy qz qw] of the cameraToWorld transformation). Both groups of sequences have important challenges, such as missing depth data caused by the sensor.

Our dataset contains the color and depth images of a Microsoft Kinect sensor along the ground-truth trajectory of the sensor. It contains indoor sequences from RGB-D sensors grouped in several categories by different texture, illumination and structure conditions. It provides 47 RGB-D sequences with ground-truth pose trajectories recorded with a motion capture system. Table 1 compares experimental results on the TUM dataset. The system is able to detect loops and relocalize the camera in real time. This repository is a collection of SLAM-related datasets. While previous datasets were used for object recognition, this dataset is used to understand the geometry of a scene.
We provide a large dataset containing RGB-D data and ground-truth data, with the goal of establishing a novel benchmark for the evaluation of visual odometry and visual SLAM systems. Similar behaviour is observed in other vSLAM [23] and VO [12] systems as well. This allows LiDAR depth measurements to be integrated directly into visual SLAM. Major features include a modern UI with dark-mode support and a live chat.

Once this works, you might want to try the 'desk' dataset, which covers four tables and contains several loop closures. On this page you will find everything you need to know for a good start with the RBG services. Thumbnail figures are shown from the Complex Urban, NCLT, Oxford RobotCar, KITTI and Cityscapes datasets. These sequences are separated into two categories: low-dynamic scenarios and high-dynamic scenarios.

We provide the time-stamped color and depth images as a gzipped tar file (TGZ). In EuRoC format, each pose is a line in the file with the following format: timestamp[ns],tx,ty,tz,qw,qx,qy,qz.
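Converting a EuRoC pose line to the TUM format described earlier mainly means rescaling the timestamp from nanoseconds to seconds and reordering the quaternion (EuRoC lists qw first, TUM lists it last). A small illustrative converter, not taken from any official tool:

```python
def euroc_to_tum(line):
    """Convert one EuRoC pose line (timestamp[ns],tx,ty,tz,qw,qx,qy,qz)
    to TUM trajectory format (timestamp[s] tx ty tz qx qy qz qw)."""
    ts_ns, tx, ty, tz, qw, qx, qy, qz = line.strip().split(",")
    t_s = int(ts_ns) / 1e9  # nanoseconds -> seconds
    # translation unchanged; quaternion reordered to scalar-last
    fields = [f"{t_s:.6f}", tx, ty, tz, qx, qy, qz, qw]
    return " ".join(fields)

tum_line = euroc_to_tum("1000000000,1,2,3,0.5,0.1,0.2,0.3")
```

Formatting the timestamp with six decimals matches the second-resolution convention of the TUM files; adjust if your tooling needs more precision.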
To register, write an email to rbg@in.tum.de with the following information: first name, surname, date of birth, matriculation number. In 2012, the Computer Vision Group at the Technical University of Munich (TUM) released an RGB-D dataset that is now the most widely used RGB-D dataset; it was captured with a Kinect and contains depth images, RGB images, and ground-truth trajectories.

RBG – Rechnerbetriebsgruppe Mathematik und Informatik. Helpdesk: Monday to Friday, 08:00–18:00. Phone: 18018. Mail: rbg@in.tum.de.

The dataset of [35] and the real-world TUM RGB-D dataset [32] are two benchmarks widely used to compare and analyze 3D scene reconstruction systems in terms of camera pose estimation and surface reconstruction. The TUM RGB-D dataset includes 39 indoor scene sequences, of which we selected the dynamic sequences to evaluate our system.

PTAM [18] is a monocular, keyframe-based SLAM system which was the first work to introduce the idea of splitting camera tracking and mapping into parallel threads. Loop closure detection is a significant component of V-SLAM (visual simultaneous localization and mapping) systems. In the following section of this paper, we present the framework of the proposed method, OC-SLAM, with the modules in the semantic object detection thread and the dense mapping thread. This paper uses the TUM RGB-D dataset, which contains dynamic targets, to verify the effectiveness of the proposed algorithm. Welcome to the RBG Helpdesk! What kind of assistance do we offer?
The Rechnerbetriebsgruppe (RBG) maintains the infrastructure of the faculties of informatics and mathematics. The figure shows results of point–object association for an image in fr2/desk of the TUM RGB-D dataset, where points belonging to the same object share the color of the corresponding bounding box. There are great expectations that such systems will lead to a boost of new 3D perception-based applications.

Note that the initializer is very slow and does not work very reliably. Unfortunately, the TUM Mono-VO images are provided only in the original, distorted form. The data was recorded at full frame rate (30 Hz) and sensor resolution 640 × 480. The depth images are already registered to the color images.

Compared with state-of-the-art methods, experiments on the TUM RGB-D dataset, the KITTI odometry dataset, and a practical environment show that SVG-Loop has advantages in complex environments with varying light, changeable weather, and dynamic interference. You will need to create a settings file with the calibration of your camera.

However, the pose estimation accuracy of ORB-SLAM2 degrades when a significant part of the scene is occupied by moving objects (e.g., vehicles). This is an urban sequence with multiple loop closures that ORB-SLAM2 was able to successfully detect. The sequences contain both the color and depth images at full sensor resolution (640 × 480). In this section, our method is tested on the TUM RGB-D dataset (Sturm et al., 2012).
Example result (left: without dynamic object detection or masks; right: with YOLOv3 and masks), run on rgbd_dataset_freiburg3_walking_xyz.

Getting started: the proposed V-SLAM has been tested on the public TUM RGB-D dataset. However, the way outliers in real data are handled directly affects estimation accuracy. Every year, TUM's Department of Informatics (ranked #1 in Germany) welcomes over a thousand freshmen to the undergraduate program. Configuration profiles: there are multiple configuration variants; the standard profile is for general-purpose use.

The results demonstrate that the absolute trajectory accuracy of DS-SLAM can be improved by one order of magnitude compared with ORB-SLAM2. This paper adopts the TUM dataset for evaluation. You need to be registered for the lecture via TUMonline to get access to the livestream.

The dataset comes from the TUM Department of Informatics: each sequence of the TUM RGB-D benchmark contains RGB images and depth images recorded with a Microsoft Kinect RGB-D camera in a variety of scenes, together with the accurate motion trajectory of the camera obtained by the motion capture system.
The tool downloads livestreams from TUM-Live. The key constituent of simultaneous localization and mapping (SLAM) is the joint optimization of sensor trajectory estimation and 3D map construction. We use the calibration model of OpenCV.

The TUM RGB-D dataset's indoor sequences were used to test their methodology, and they were able to provide results on par with those of well-known VSLAM methods. TE-ORB_SLAM2 is a work that investigates two different methods to improve the tracking of ORB-SLAM2. As an accurate 3D position tracking technique for dynamic environments, our approach utilizing observationally consistent CRFs can efficiently compute a high-precision camera trajectory (red) close to the ground truth (green). The experiments are performed on the popular TUM RGB-D dataset.

The ICL-NUIM dataset aims at benchmarking RGB-D, visual odometry and SLAM algorithms. The helpdesk is mainly responsible for problems with the hardware and software of the ITO. RBG VPN configuration files: installation guide.
TUM's lecture streaming service currently serves up to 100 courses every semester with up to 2000 active students. Various camera types are covered: stereo, event-based, omnidirectional, and RGB-D cameras. We provide examples to run the SLAM system on the KITTI dataset as stereo or monocular, on the TUM dataset as RGB-D or monocular, and on the EuRoC dataset as stereo or monocular. It enables map reuse and loop detection. In case you need Matlab for research or teaching purposes, please contact support@ito.tum.de. Among these datasets, the Dynamic Objects category contains nine datasets.

Next, run NICE-SLAM. TUM RGB-D scribble-based segmentation benchmark description. Ground-truth trajectories obtained from a high-accuracy motion-capture system are provided in the TUM datasets. The proposed DT-SLAM approach is validated using the TUM RGB-D and EuRoC benchmark datasets for location tracking performance. Qualitative and quantitative experiments show that our method outperforms state-of-the-art approaches in various dynamic scenes in terms of both accuracy and robustness. We also provide a ROS node to process live monocular, stereo or RGB-D streams.
This may be because you did not access this login page via the page you wanted to log in to (e.g., via a shortcut or the back button), or cookies are disabled. evaluate_ate_scale (GitHub: raulmur/evaluate_ate_scale) is a modified tool of the TUM RGB-D dataset that automatically computes the optimal scale factor aligning trajectory and ground truth.

We evaluate RDS-SLAM on the TUM RGB-D dataset, and experimental results show that RDS-SLAM can run at 30.3 ms per frame in dynamic scenarios using only an Intel Core i7 CPU. The results indicate that DS-SLAM significantly outperforms ORB-SLAM2 regarding accuracy and robustness in dynamic environments. I received my MSc in Informatics in the summer of 2019 at TUM and, before that, my BSc in Informatics and Multimedia at the University of Augsburg.

Note: all students get 50 pages of printing every semester for free. Employees, guests and HiWis have an ITO account, and the print account has been added to the ITO account.

DVO uses both RGB images and depth maps, while ICP and our algorithm use only depth information. The format of the RGB-D sequences is the same as in the TUM RGB-D dataset and is described here. The TUM RGB-D dataset [10] is a large set of data with sequences containing both RGB-D data and ground-truth pose estimates from a motion capture system. Figure 1 illustrates the tracking performance of our method and of state-of-the-art methods on the Replica dataset.

The two Stratum 2 time servers are in turn clients of three Stratum 1 servers each, which are located in the DFN. The table reports TUM RGB-D benchmark RMSE (cm); the RGB-D SLAM results are taken from the benchmark website. The results indicate that the proposed DT-SLAM achieves a mean RMSE of 0.0807. This file contains information about publicly available datasets suited for monocular, stereo, RGB-D and lidar SLAM. The TUM RGB-D dataset contains 39 sequences collected in diverse interior settings and provides a diversity of datasets for different uses.
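The key addition of evaluate_ate_scale over the stock evaluation script is that optimal scale factor, which matters for monocular trajectories whose scale is unobservable. A simplified, scale-only sketch of the idea (the real tool estimates rotation and translation as well, via a Horn-style alignment; the function name here is my own):

```python
def optimal_scale(est, gt):
    """Least-squares scale s minimising sum ||gt_i - s * est_i||^2
    after removing the centroids of both point sets."""
    n = len(est)
    ce = [sum(p[i] for p in est) / n for i in range(3)]  # estimate centroid
    cg = [sum(p[i] for p in gt) / n for i in range(3)]   # ground-truth centroid
    num = den = 0.0
    for pe, pg in zip(est, gt):
        for i in range(3):
            de, dg = pe[i] - ce[i], pg[i] - cg[i]
            num += de * dg
            den += de * de
    return num / den

# a monocular trajectory that is uniformly half the true size recovers scale 2
s = optimal_scale([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                  [(0, 0, 0), (2, 0, 0), (0, 2, 0)])
```

Centering first is essential: without it, any translation offset between the trajectories would bias the scale estimate.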
Download 3 sequences of the TUM RGB-D dataset. Finally, run the following command to visualize the results. It also outperforms the other four state-of-the-art SLAM systems that cope with dynamic environments. The number of RGB-D images is 154, each with a corresponding scribble and a ground-truth image.

Locations: Garching (on campus), Main Campus Munich (on campus), and Zoom (online). Contact: post your questions to the corresponding channels on Zulip. Here you will find more information and instructions for installing the certificate for many operating systems.

The New College dataset (year: 2009; publication: The New College Vision and Laser Data Set; available sensors: GPS, odometry, stereo cameras, omnidirectional camera, lidar; ground truth: no). The TUM RGB-D dataset [39] contains sequences of indoor videos under different environmental conditions, e.g., illuminance and varied scene settings, which include both static and moving objects. We tested the proposed SLAM system on the popular TUM RGB-D benchmark dataset. We are happy to share our data with other researchers.

We present SplitFusion, a novel dense RGB-D SLAM framework. Here you can run NICE-SLAM yourself on a short ScanNet sequence with 500 frames. SUNCG is a large-scale dataset of synthetic 3D scenes with dense volumetric annotations.
This zone conveys joint 2D and 3D information, corresponding to the distance of a given pixel to the nearest human body and the depth distance to the nearest human, respectively. The office room scene is used. The lists are continuously updated.

The living room sequence has 3D surface ground truth together with the depth maps as well as camera poses, and as a result it is perfectly suited for benchmarking camera trajectories. Estimating the camera trajectory from an RGB-D image stream: TODO.

However, most visual SLAM systems rely on the static scene assumption and consequently have severely reduced accuracy and robustness in dynamic scenes. Simultaneous localization and mapping (SLAM) is one of the fundamental capabilities for intelligent mobile robots to perform state estimation in unknown environments. Traditional visual SLAM algorithms run robustly under the assumption of a static environment but fail in dynamic scenarios, since moving objects impair camera pose tracking. It can effectively improve robustness and accuracy in dynamic indoor environments.

Usage of the example binary:
./build/run_tum_rgbd_slam
Allowed options:
  -h, --help             produce help message
  -v, --vocab arg        vocabulary file path
  -d, --data-dir arg     directory path which contains the dataset
  -c, --config arg       config file path
  --frame-skip arg (=1)  interval of frame skip
  --no-sleep             do not wait for the next frame in real time
  --auto-term            automatically terminate the viewer
  --debug                debug mode
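Once estimated and ground-truth trajectories are time-associated and aligned, the absolute trajectory error (ATE) RMSE reported by such benchmarks reduces to a root mean square over per-pose position errors. A minimal sketch under that assumption:

```python
import math

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error between already
    time-associated and aligned position lists (one 3-tuple per pose)."""
    assert len(est) == len(gt) and est
    se = 0.0
    for (ex, ey, ez), (gx, gy, gz) in zip(est, gt):
        se += (ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
    return math.sqrt(se / len(est))

# a constant 0.1 m offset on every pose yields an RMSE of 0.1 m
err = ate_rmse([(0, 0, 0), (1, 0, 0)], [(0, 0, 0.1), (1, 0, 0.1)])
```

The benchmark's own scripts do the association and SE(3) alignment first; this sketch covers only the final error statistic.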
Our approach was evaluated by examining the performance of the integrated SLAM system. The energy-efficient DS-SLAM system implemented on a heterogeneous computing platform is evaluated on the TUM RGB-D dataset. This dataset is a standard RGB-D dataset provided by the Computer Vision group of the Technical University of Munich, Germany, and it has been used by many scholars in the SLAM community.