Full-text searching ri


First, require the three libraries we need:
 
  require 'rdoc/ri/ri_paths'
  require 'find'
  require 'yaml'
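
RI::Paths::PATH is where rdoc/ri/ri_paths comes in: it holds the list of directories in which ri keeps its generated data (system, site, and per-user, whichever exist). As a quick sanity check you can print the directories that will be searched -- a minimal sketch, assuming Ruby 1.8's rdoc layout:

  require 'rdoc/ri/ri_paths'

  # Print the ri data directories that the search below will walk.
  puts RI::Paths::PATH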

 
Here is the actual implementation:
 
  def full_text_search(text)
    puts "Searching for #{text}"
    puts
    dirs = RI::Paths::PATH
    dirs.each do |dir|
      Dir.chdir dir do
        Find.find('.') do |path|
          next unless test ?f, path
          yaml = File.read path
          if yaml =~ /#{text}/io then
            full_name = $1 if yaml[/full_name: (.*)/]
            puts "** FOUND IN: #{full_name}"
            # Strip the !ruby/object type tags so the file loads as plain hashes.
            data = YAML.load yaml.gsub(/ \!.*/, '')
            # Join the comment paragraphs and undo the HTML escaping.
            desc = data['comment'].map { |x| x.values }.flatten.join("\n").
              gsub(/&quot;/, "'").gsub(/&lt;/, "<").
              gsub(/&gt;/, ">").gsub(/&amp;/, "&")
            puts
            puts desc
            puts
          end
        end
      end
    end
  end

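Each ri data file is a YAML document describing one class or method: it carries a full_name field and a comment field holding a list of flow objects, which is why the code strips the !ruby/object type tags before YAML.load and then collects the values of each comment entry. To see what is actually being parsed, you can peek at a single file -- a rough sketch assuming Ruby 1.8's ri format, where the data files end in .yaml:

  require 'rdoc/ri/ri_paths'
  require 'find'
  require 'yaml'

  # Grab the first ri data file we can find and load it as plain hashes.
  path = nil
  Find.find(RI::Paths::PATH.first) do |f|
    if f =~ /\.yaml$/
      path = f
      break
    end
  end

  data = YAML.load(File.read(path).gsub(/ \!.*/, ''))
  p data['full_name']   # e.g. "Array"
  p data.keys           # the fields available to the search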
 
Example:
 
    full_text_search("duplicate")

 
Sample output:
 
Searching for duplicate

** FOUND IN: Zlib::Inflate

Zlib::Inflate is the class for decompressing compressed data. Unlike Zlib::Deflate, an instance of this class is not able to duplicate (clone, dup) itself.

** FOUND IN: Zlib::Deflate#initialize_copy

Duplicates the deflate stream.

** FOUND IN: StringScanner#initialize_copy

Duplicates a StringScanner object.

** FOUND IN: String#tr_s

Processes a copy of <em>str</em> as described under <tt>String#tr</tt>, then removes duplicate characters in regions that were affected by the translation.
   'hello'.tr_s('l', 'r')     #=> 'hero'
   'hello'.tr_s('el', '*')    #=> 'h*o'
   'hello'.tr_s('el', 'hx')   #=> 'hhxo'

** FOUND IN: Set

Set implements a collection of unordered values with no duplicates. This is a hybrid of Array's intuitive inter-operation facilities and Hash's fast lookup.
Several methods accept any Enumerable object (implementing <tt>each</tt>) for greater flexibility: new, replace, merge, subtract, |, &, -, ^.
The equality of each couple of elements is determined according to Object#eql? and Object#hash, since Set uses Hash as storage.
Finally, if you are using class Set, you can also use Enumerable#to_set for convenience.
Example
  require 'set'
  s1 = Set.new [1, 2]                   # -> #<Set: {1, 2}>
  s2 = [1, 2].to_set                    # -> #<Set: {1, 2}>
  s1 == s2                              # -> true
  s1.add('foo')                         # -> #<Set: {1, 2, 'foo'}>
  s1.merge([2, 6])                      # -> #<Set: {6, 1, 2, 'foo'}>
  s1.subset? s2                         # -> false
  s2.subset? s1                         # -> true

** FOUND IN: REXML::Parsers::SAX2Parser#get_procs

The following methods are duplicates, but it is faster than using a helper

** FOUND IN: REXML::Parent#deep_clone

Deeply clones this object. This creates a complete duplicate of this Parent, including all descendants.

** FOUND IN: REXML::Comment::new

Constructor. The first argument can be one of three types: @param first If String, the contents of this comment are set to the argument. If Comment, the argument is duplicated. If Source, the argument is scanned for a comment. @param second If the first argument is a Source, this argument should be nil, not supplied, or a Parent to be set as the parent of this object

** FOUND IN: OpenStruct#initialize_copy

Duplicate an OpenStruct object members.

** FOUND IN: Object#dup

Produces a shallow copy of <em>obj</em>---the instance variables of <em>obj</em> are copied, but not the objects they reference. <tt>dup</tt> copies the tainted state of <em>obj</em>. See also the discussion under <tt>Object#clone</tt>. In general, <tt>clone</tt> and <tt>dup</tt> may have different semantics in descendent classes. While <tt>clone</tt> is used to duplicate an object, including its internal state, <tt>dup</tt> typically uses the class of the descendent object to create the new instance.
This method may have class-specific behavior. If so, that behavior will be documented under the #<tt>initialize_copy</tt> method of the class.

** FOUND IN: Net::HTTPHeader#get_fields

[Ruby 1.8.3] Returns an array of header field strings corresponding to the case-insensitive <tt>key</tt>. This method allows you to get duplicated header fields without any processing. See also #[].
  p response.get_fields('Set-Cookie')
    #=> ['session=al98axx; expires=Fri, 31-Dec-1999 23:58:23',
         'query=rubyscript; expires=Fri, 31-Dec-1999 23:58:23']
  p response['Set-Cookie']
    #=> 'session=al98axx; expires=Fri, 31-Dec-1999 23:58:23, query=rubyscript; expires=Fri, 31-Dec-1999 23:58:23'

** FOUND IN: Hash#update

Adds the contents of <em>other_hash</em> to <em>hsh</em>, overwriting entries with duplicate keys with those from <em>other_hash</em>.
   h1 = { 'a' => 100, 'b' => 200 }
   h2 = { 'b' => 254, 'c' => 300 }
   h1.merge!(h2)   #=> {'a'=>100, 'b'=>254, 'c'=>300}

** FOUND IN: Hash#store

Element Assignment---Associates the value given by <em>value</em> with the key given by <em>key</em>. <em>key</em> should not have its value changed while it is in use as a key (a <tt>String</tt> passed as a key will be duplicated and frozen).
   h = { 'a' => 100, 'b' => 200 }
   h['a'] = 9
   h['c'] = 4
   h   #=> {'a'=>9, 'b'=>200, 'c'=>4}

** FOUND IN: Hash#merge

Returns a new hash containing the contents of <em>other_hash</em> and the contents of <em>hsh</em>, overwriting entries in <em>hsh</em> with duplicate keys with those from <em>other_hash</em>.
   h1 = { 'a' => 100, 'b' => 200 }
   h2 = { 'b' => 254, 'c' => 300 }
   h1.merge(h2)   #=> {'a'=>100, 'b'=>254, 'c'=>300}
   h1             #=> {'a'=>100, 'b'=>200}

** FOUND IN: Hash#merge!

Adds the contents of <em>other_hash</em> to <em>hsh</em>, overwriting entries with duplicate keys with those from <em>other_hash</em>.
   h1 = { 'a' => 100, 'b' => 200 }
   h2 = { 'b' => 254, 'c' => 300 }
   h1.merge!(h2)   #=> {'a'=>100, 'b'=>254, 'c'=>300}

** FOUND IN: Hash#[]=

Element Assignment---Associates the value given by <em>value</em> with the key given by <em>key</em>. <em>key</em> should not have its value changed while it is in use as a key (a <tt>String</tt> passed as a key will be duplicated and frozen).
   h = { 'a' => 100, 'b' => 200 }
   h['a'] = 9
   h['c'] = 4
   h   #=> {'a'=>9, 'b'=>200, 'c'=>4}

** FOUND IN: Array#uniq

Returns a new array by removing duplicate values in <em>self</em>.
   a = [ 'a', 'a', 'b', 'b', 'c' ]
   a.uniq   #=> ['a', 'b', 'c']

** FOUND IN: Array#uniq!

Removes duplicate elements from <em>self</em>. Returns <tt>nil</tt> if no changes are made (that is, no duplicates are found).
   a = [ 'a', 'a', 'b', 'b', 'c' ]
   a.uniq!   #=> ['a', 'b', 'c']
   b = [ 'a', 'b', 'c' ]
   b.uniq!   #=> nil

** FOUND IN: Array#|

Set Union---Returns a new array by joining this array with other_array, removing duplicates.
   [ 'a', 'b', 'c' ] | [ 'c', 'd', 'a' ]
          #=> [ 'a', 'b', 'c', 'd' ]

** FOUND IN: Array#&

Set Intersection---Returns a new array containing elements common to the two arrays, with no duplicates.
   [ 1, 1, 3, 5 ] & [ 1, 2, 3 ]   #=> [ 1, 3 ]
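
One caveat: the search text is interpolated straight into a regular expression (/#{text}/io), so characters such as ?, [ and ] are treated as pattern syntax. To search for a literal string you can escape it first -- a small sketch reusing the function defined above:

  full_text_search(Regexp.escape("Array#[]"))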