I have very little programming experience, but I am interested in learning. I have an idea that I want to work on but I don't know how to approach it. If anyone could help in any way, I would really appreciate it.
Basic Idea:
How feasible would it be to make software that fits together multiple Kinect 3D point-cloud captures of an environment, taken from different angles, by matching the geometry of the clouds? The idea is that the software would search for overlapping regions between the clouds by comparing them while rotating and translating them in 3-space. That way you could just walk around pointing the capture device in all sorts of directions, and an algorithm would fit everything together into one detailed 3D model.
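From what I've read, this matching problem is called point-cloud registration, and the classic algorithm for it is Iterative Closest Point (ICP): repeatedly pair each point with its nearest neighbour in the other cloud, solve for the best rigid rotation and translation, apply it, and iterate. Here is a minimal point-to-point ICP sketch in Python with NumPy (the function names are my own, and real pipelines like PCL or Open3D use k-d trees and more robust variants, not the brute-force matching shown here):

```python
import numpy as np

def best_fit_transform(src, dst):
    """Kabsch/SVD solution: rigid (R, t) that best maps src points onto dst points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=50):
    """Iteratively align the src cloud to the dst cloud; returns the moved src."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours; fine for toy sizes only
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t                # apply the rigid transform to every point
    return cur
```

For example, if `dst` is a cloud and `src` is the same cloud slightly rotated and shifted, `icp(src, dst)` should move `src` back on top of `dst`. The catch for the walk-around idea is that plain ICP only works when the clouds already roughly overlap, so stitching many views needs a coarse initial alignment step first (feature matching, or the sensor's tracked pose).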
Does anyone have any suggestions on how I would approach this?
Thanks!