What are the best Haskell libraries to operationalize a program?

If I'm going to put a program into production, there are several things I need that program to do in order to consider it "operationalized" – that is, running and maintainable in a measurable and verifiable way by both engineers and operations staff. For my purposes, an operationalized program must:

  • Be able to log at multiple levels (ex: debug, warning, etc.).
  • Be able to collect and share metrics/statistics about the types of work the program is doing and how long that work is taking. Ideally, the collected metrics are available in a format that's compatible with commonly-used monitoring tools like Ganglia, or can be so munged.
  • Be configurable, ideally via a system that allows configured properties in running programs to be updated without restarting said programs.
  • Be deployable to remote servers in a repeatable way.
In the Scala world, there are good libraries for dealing with at least the first three requirements. Examples:

  • Logula for logging.
  • Metrics or Ostrich for collecting and reporting metrics.
  • Configgy or Fig for configuration.
As for deployment, one approach taken in the Scala world is to bundle together the bytecode and libraries that comprise one's program with something like assembly-sbt, then push the resulting bundle (a "fat JAR") to remote servers with a tool like Capistrano, which executes commands in parallel over SSH. This isn't a problem that necessitates language-specific tools, but I'm curious whether such a tool exists in the Haskell community.

    There are probably Haskell libraries that provide the traits I've described above. I'd like to know which of the available libraries are considered "best"; that is, which are most mature, well-maintained, commonly used in the Haskell community, and exemplary of Haskell best practices.

    If there are any other libraries, tools, or practices around making Haskell code "production-ready", I'd love to know about those as well.


    This is a great question! Here's a first cut.

    Be able to log at multiple levels (ex: debug, warning, etc.).

    hslogger is easily the most popular logging framework.
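To illustrate, a minimal sketch of leveled logging with hslogger's System.Log.Logger API (the logger name "Main" and the messages are made up for the example):

```haskell
import System.Log.Logger

main :: IO ()
main = do
  -- Lower the root logger's threshold so DEBUG messages are emitted;
  -- the default threshold is WARNING.
  updateGlobalLogger rootLoggerName (setLevel DEBUG)
  debugM   "Main" "entering main loop"
  warningM "Main" "cache miss rate is high"
  errorM   "Main" "could not connect to upstream"
```

Loggers are named hierarchically, so subsystems can be given their own levels and handlers without touching the rest of the program.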

    Be able to collect and share metrics/statistics about the types of work the program is doing and how long that work is taking. Ideally, the collected metrics are available in a format that's compatible with commonly-used monitoring tools like Ganglia, or can be so munged.

I'm not aware of any standardized reporting tools; however, extracting reports from +RTS -s output (or via profiling output flags) is something I've done in the past.

    $ ./A +RTS -s
    64,952 bytes allocated in the heap
    1 MB total memory in use
     %GC time       0.0%  (6.1% elapsed)
     Productivity 100.0% of total user, 0.0% of total elapsed
    

    You can get this in machine-readable format too:

    $ ./A +RTS -t --machine-readable
    
     [("bytes allocated", "64952")
     ,("num_GCs", "1")
     ,("average_bytes_used", "43784")
     ,("max_bytes_used", "43784")
     ,("num_byte_usage_samples", "1")
     ,("peak_megabytes_allocated", "1")
     ,("init_cpu_seconds", "0.00")
     ,("init_wall_seconds", "0.00")
     ,("mutator_cpu_seconds", "0.00")
     ,("mutator_wall_seconds", "0.00")
     ,("GC_cpu_seconds", "0.00")
     ,("GC_wall_seconds", "0.00")
     ]
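Conveniently, that machine-readable output is itself a valid Haskell expression (a list of string pairs), so it can be parsed with Read alone. A minimal sketch, assuming the input is exactly the list shown above:

```haskell
-- Parse the association list printed by +RTS -t --machine-readable.
parseStats :: String -> [(String, String)]
parseStats = read

-- Look up a single statistic by name.
stat :: String -> String -> Maybe String
stat key = lookup key . parseStats

main :: IO ()
main = do
  let sample = "[(\"bytes allocated\",\"64952\"),(\"num_GCs\",\"1\")]"
  print (stat "bytes allocated" sample)  -- Just "64952"
```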
    

Ideally you could attach to a running GHC runtime over a socket and look at these GC stats interactively, but currently that's not super easy (it needs an FFI binding to the "rts/Stats.h" interface). You can attach to a process with ThreadScope and monitor GC and threading behavior.

Similar flags are available for incremental, logged time and space profiling, which can be used for monitoring (e.g. such graphs can be built incrementally).

    hpc collects a lot of statistics about program execution, via its Tix type, and people have written tools to log by time-slice what code is executing.
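For instance, the Tix data written by an hpc-instrumented binary can be read back programmatically via the hpc library's Trace.Hpc.Tix module (the file name A.tix is just an example):

```haskell
import Trace.Hpc.Tix (Tix (..), readTix, tixModuleName)

main :: IO ()
main = do
  -- readTix returns Nothing if the .tix file is absent or unparsable.
  mtix <- readTix "A.tix"
  case mtix of
    Nothing         -> putStrLn "no coverage data found"
    Just (Tix mods) -> mapM_ (putStrLn . tixModuleName) mods
```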

    Be configurable, ideally via a system that allows configured properties in running programs to be updated without restarting said programs.

Several tools are available for this. You can do xmonad-style state reloading, or move up to code hotswapping via the plugins family of packages or hint. Some of these are more experimental than others.
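As a sketch of the xmonad-style approach: serialize the state, exec a fresh copy of the binary, and read the state back in on startup. This assumes a POSIX system (executeFile from the unix package) and a Show/Read-serializable state; the AppState type and --resume flag are made up for the example:

```haskell
import System.Environment (getArgs, getExecutablePath)
import System.Posix.Process (executeFile)

-- Hypothetical application state; Show/Read is the simplest wire format.
data AppState = AppState { counter :: Int } deriving (Show, Read)

-- Replace the running process with a fresh copy of ourselves,
-- handing the serialized state over on the command line.
restart :: AppState -> IO ()
restart st = do
  self <- getExecutablePath
  executeFile self False ["--resume", show st] Nothing

main :: IO ()
main = do
  args <- getArgs
  let st = case args of
             ("--resume" : s : _) -> read s
             _                    -> AppState 0
  putStrLn ("resumed at " ++ show (counter st))
  -- on a reload trigger, one would call: restart st { counter = counter st + 1 }
```

Because the new binary re-reads its (recompiled) code but inherits the old state, this gives config and code reloads without losing in-memory data.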

    Reproducible deployments

Galois recently released cabal-dev, which is a tool for doing reproducible builds (i.e. dependencies are scoped and controlled).
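Typical usage looks roughly like this (a sketch; the command names are from cabal-dev's interface and may differ between versions):

```shell
# Build into a project-local sandbox (./cabal-dev) instead of ~/.cabal,
# so every checkout resolves the same dependency versions.
$ cabal-dev install

# Pin a locally patched dependency into the sandbox, then rebuild.
$ cabal-dev add-source ../my-patched-library
$ cabal-dev install
```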


  • Regarding configuration, I've found ConfigFile to be useful for my projects. I use it for all my daemons in production. It doesn't update automatically.
  • I use cabal-dev for creating reproducible builds across environments (local, dev, colleague-local). Really cabal-dev is indispensable, especially for its ability to support local, patched versions of libraries within the project directory.
  • For what it's worth, I would go with xmonad-style state reloading. Haskell's purity makes this trivial; state migration is an issue, but it is with any approach. I experimented with hs-plugins and hint for my IRCd; in the former case there was a GHC runtime problem, and in the latter a segmentation fault. I left the branches on Github for later postmortem: https://github.com/chrisdone/hulk
  • Example of ConfigFile:

    # Default options
    [DEFAULT]
    hostname: localhost
    # Options for the first file
    [file1]
    location: /usr/local
    user: Fred
    
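Reading that file with the ConfigFile library looks roughly like this (a sketch using Data.ConfigFile's MonadError-based API; app.cfg is a placeholder path):

```haskell
import Control.Monad (join)
import Control.Monad.Error (liftIO, runErrorT)
import Data.ConfigFile

main :: IO ()
main = do
  result <- runErrorT $ do
    -- Parse the file; failures (missing file, bad syntax, missing
    -- options) are all collected as CPError values.
    cp <- join (liftIO (readfile emptyCP "app.cfg"))
    location <- get cp "file1" "location"
    hostname <- get cp "DEFAULT" "hostname"
    return (location :: String, hostname :: String)
  case result of
    Left err        -> putStrLn ("config error: " ++ show err)
    Right (loc, hn) -> putStrLn (loc ++ " on " ++ hn)
```

Options in the [DEFAULT] section are inherited by the other sections, so get cp "file1" "hostname" would also succeed here.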

    I would echo everything Don said and add a few general bits of advice.

Two additional tools and libraries you might want to consider:

  • QuickCheck for property based testing
  • hlint as an extended version of -Wall
    Those are both targeted at code quality.
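A QuickCheck property is just a Bool-valued function over generated inputs; a minimal sketch:

```haskell
import Test.QuickCheck

-- A property: reversing a list twice gives back the original list.
-- QuickCheck generates random [Int] inputs and checks each one.
prop_reverseInvolutive :: [Int] -> Bool
prop_reverseInvolutive xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseInvolutive  -- "+++ OK, passed 100 tests."
```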

As a coding practice, avoid lazy IO. If you need streaming IO, then go with one of the iteratee libraries, such as enumerator. If you look on Hackage, you'll see libraries like http-enumerator that use an enumerator style for HTTP requests.
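For whole-file work, the simplest way to avoid lazy IO is strict ByteString IO, which reads everything and closes the handle before returning (bytestring ships with GHC); truly streaming workloads are where the iteratee libraries fit. A sketch of the strict version, with the line counting split out as a pure function:

```haskell
import qualified Data.ByteString.Char8 as B

-- Pure part: count lines in an already-read buffer.
countLines :: B.ByteString -> Int
countLines = length . B.lines

-- Strict IO: the whole file is read and the handle closed before this
-- returns, unlike lazy Prelude.readFile, whose handle stays open until
-- the result string is fully forced.
countLinesFile :: FilePath -> IO Int
countLinesFile path = fmap countLines (B.readFile path)
```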

As for picking libraries on Hackage, it can sometimes help to look at how many packages depend on something. To easily see the reverse dependencies of a package, you can use this website, which mirrors Hackage:

  • http://bifunctor.homelinux.net/~roel/hackage/packages/archive/revdeps-list.html
  • If your application ends up doing tight loops, like a web server handling many requests, laziness can be an issue in the form of space leaks. Often this is a matter of adding strictness annotations in the right places. Profiling, experience, and reading core are the main techniques I know of for combating this sort of thing. The best profiling reference I know of is Chapter 25 of Real-World Haskell.
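A concrete instance of such a strictness fix, using only base (Data.List):

```haskell
import Data.List (foldl')

-- Lazy foldl accumulates a chain of unevaluated (+) thunks as long as
-- the input list -- a classic space leak:
leakySum :: [Int] -> Int
leakySum = foldl (+) 0

-- foldl' forces the accumulator at each step, so the fold runs in
-- constant space; a bang pattern on the accumulator has the same effect.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0
```

Both compute the same result; the difference only shows up in heap profiles on large inputs, which is exactly why profiling and reading core are the tools for finding these.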
