Best way to store large csv files with a git project

Hey, I have another question following on from this one: Why not use Git for large files? One user in the comment section mentions that Git LFS is not really made to store large CSV files. So my question is: what would be a better way? Should I just use plain Git? But then what about the problems large files cause in the history?


For large CSV files, you can replicate what Git LFS does and add your own smudge/clean content filter driver.

The smudge script would, on checkout, fetch your CSV file from external storage (for instance S3, as mentioned in the comments).

The clean script would, on commit, check whether the file has changed and, if so, upload it again, which, as you note, should not happen often.

That way, you avoid keeping a large text file in your Git repo.
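A minimal sketch of such a filter driver, runnable end to end. The filter name `csvstore` and the scripts are illustrative; to keep the example self-contained it uses a local directory as the "external storage" where a real setup would call out to S3 (e.g. `aws s3 cp`).

```shell
set -e
repo=$(mktemp -d); store=$(mktemp -d)   # $store stands in for S3
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

# clean: on commit, put the real content into external storage keyed by
# its hash, and let Git store only that key. (sha256sum is coreutils;
# use `shasum -a 256` on macOS.)
cat > clean.sh <<EOF
#!/bin/sh
tmp=\$(mktemp)
cat > "\$tmp"
key=\$(sha256sum "\$tmp" | cut -d' ' -f1)
cp "\$tmp" "$store/\$key"   # real setup: upload to S3 instead
echo "\$key"
EOF

# smudge: on checkout, read the key Git stored and fetch the real content.
cat > smudge.sh <<EOF
#!/bin/sh
read key
cat "$store/\$key"          # real setup: download from S3 instead
EOF
chmod +x clean.sh smudge.sh

# Register the driver and route *.csv through it.
git config filter.csvstore.clean  "$repo/clean.sh"
git config filter.csvstore.smudge "$repo/smudge.sh"
echo '*.csv filter=csvstore' > .gitattributes

# Commit a CSV: the repo stores only the small key, not the data.
printf 'a,b\n1,2\n' > data.csv
git add .gitattributes data.csv
git commit -qm "add csv"

# Round-trip: delete the working copy and let the smudge filter restore it.
rm data.csv
git checkout -- data.csv
cat data.csv
```

What Git actually keeps in history for `data.csv` is just the key (visible with `git cat-file -p :data.csv`), so the repository stays small no matter how large the CSV grows.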
